Compressing Fingerprint Templates by Solving the k-Node Minimum Label Spanning Arborescence Problem by Branch-and-Price


MASTER'S THESIS (Diplomarbeit)
submitted in fulfilment of the requirements for the academic degree of Diplom-Ingenieurin in the programme Software Engineering / Internet Computing (066/937)
by Corinna Thöni, BSc., matriculation number 9826437,
at the Faculty of Informatics, Institute of Computer Graphics and Algorithms, Vienna University of Technology (Technische Universität Wien, Karlsplatz 13, A-1040 Wien, Tel. +43/(0)1/58801-0, http://www.tuwien.ac.at),
supervised by Univ.-Prof. Dipl.-Ing. Dr. techn. Günther Raidl and Univ.Ass. Mag. Dipl.-Ing. Andreas Chwatal.
Vienna, 05.02.2010

Abstract

This thesis deals with the development of exact algorithms based on mathematical programming techniques for the solution of a combinatorial optimization problem which emerged in the context of a new compression model recently developed at the Algorithms and Data Structures Group of the Institute of Computer Graphics and Algorithms at the Vienna University of Technology. This compression model is particularly suited for small sets of unordered, multidimensional data and has its application background in the field of biometrics. More precisely, fingerprint data given as a set of characteristic points and associated properties are to be compressed so that they can be embedded into passport images by watermarking techniques and used as an additional security feature for, e.g., passports.

The considered model is based on the well-known Minimum Label Spanning Tree (MLST) problem, whose objective is to find a small set of labels, associated with the edges of the graph, that induces a valid spanning tree. From this solution an encoding of the considered points can be derived that is more compact than their trivial representation. So far, heuristic and exact algorithms have been developed, all of them requiring a time-consuming preprocessing step. The goal of this work is to develop an improved exact algorithm that carries out the tasks of the preprocessing algorithm during the execution of the exact method in a profitable way.

Within this thesis the problem is solved by mixed integer programming techniques. For this purpose a new flow formulation is presented which can be solved directly by linear-programming-based branch-and-bound, or alternatively by branch-and-price, the latter being the main approach of this thesis. Branch-and-price starts with a restricted model and then iteratively adds new variables that may improve the objective function value. These variables are determined within the pricing problem, which has to be solved frequently during the overall solution process. For this purpose specialized data structures and algorithms based on a k-d tree are used, with the intention of exploiting possible structure in the input data. Within this tree, improving variables can be found by various traversal and bounding techniques. These algorithms can also be adapted to work as an alternative to the existing preprocessing.

All algorithms have been implemented and comprehensively tested. For the existing approaches, the new methods reduce the preprocessing run times by a factor of 50, to about 2% of the original run times. With the presented branch-and-price approach it is possible to solve a larger number of test instances to optimality than previously. For most model parameters, the branch-and-price approach outperforms the previous exact method.
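The branch-and-price loop described above can be summarized with a small skeleton. The following is only a conceptual sketch with hypothetical placeholder routines (solve_restricted_master, price_out), not the thesis's actual implementation; in the thesis the pricing step corresponds to a bounded search over a k-d tree of candidate template arcs.

```python
# Conceptual column-generation skeleton underlying branch-and-price.
# `solve_restricted_master` and `price_out` are hypothetical placeholders,
# not the routines used in the thesis.

def solve_restricted_master(columns):
    """Solve the LP relaxation restricted to `columns`; return (solution, duals)."""
    raise NotImplementedError  # e.g. call an LP solver here

def price_out(duals):
    """Return variables (columns) with negative reduced cost, or [] if none exist.

    In the thesis this step would traverse a k-d tree over candidate
    template arcs, pruning subtrees via bounds on the reduced cost.
    """
    raise NotImplementedError

def column_generation(initial_columns):
    columns = list(initial_columns)
    while True:
        solution, duals = solve_restricted_master(columns)
        new_columns = price_out(duals)
        if not new_columns:           # no improving variable exists, so the
            return solution, columns  # current LP relaxation is optimal
        columns.extend(new_columns)

# Branch-and-price embeds this loop at every node of a branch-and-bound tree;
# branching decisions restrict which columns the pricing problem may generate.
```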
Zusammenfassung

This thesis deals with the development of exact algorithms based on mathematical programming. The goal is to solve a combinatorial optimization problem which arises in the context of a new compression scheme recently developed by the Algorithms and Data Structures Group of the Institute of Computer Graphics and Algorithms at the Vienna University of Technology. This compression scheme is particularly suited for small sets of unordered, multidimensional data, with biometric applications forming the application background. In particular, fingerprint data, so-called minutiae, given as a set of characteristic points with associated properties, are to be compressed so that they can be used as an additional security feature for, e.g., passports by embedding the data into the passport image as a watermark.

The considered model is based on the well-known Minimum Label Spanning Tree (MLST) problem. Its goal is to find a small set of labels, assigned to the edges of the graph, that induces a valid spanning tree. Based on this solution, an encoding of the considered points can be derived which is more compact than their trivial representation. So far, heuristics and one exact method have been developed, all of which require a run-time-intensive preprocessing step. The goal of this work is to develop an improved exact algorithm which carries out the tasks of the preprocessing step in a beneficial way during the execution of the exact method.

In this work the problem is solved with mixed integer programming techniques. For this purpose a new flow-network formulation is presented, which can be solved directly by branch-and-bound based on linear programming, or alternatively by branch-and-price, which constitutes the main approach. Branch-and-price starts with a reduced problem and then step by step adds new variables, which may improve the objective function value, to this reduced problem. These variables are determined by means of the pricing problem, which has to be solved many times during the solution process. For this purpose, specialized data structures and algorithms based on a k-d tree were developed, which exploit possible structure of the input data. Using various traversal and bounding techniques, improving variables can be extracted efficiently from this tree data structure. Furthermore, the developed algorithms can be adapted into an alternative to the preprocessing step. All algorithms have been implemented and extensively tested.
For the existing approaches, the time required by the original preprocessing step was reduced by a factor of 50; the run times drop to 2% of the original time. With the presented branch-and-price method it is possible to solve a larger number of test instances to optimality than before. For most model parameters the branch-and-price approach outperforms the previous exact method.

Acknowledgements

I would like to thank everyone who made the completion of this thesis possible. I thank my parents Robert and Johanna as well as my brother Rudolf, who showed great patience and understanding and kept motivating me to carry on. I thank Dieter, who supported and encouraged me in everything. Thanks to Andreas Chwatal, whose excellent technical expertise and endless patience made this work possible. Thanks to Günther Raidl for the great ideas and the extensive support. Thanks to the entire Algorithms and Data Structures Group for the good technical infrastructure.

Erklärung zur Verfassung (Declaration of Authorship)

I hereby declare that I have written this thesis independently, that I have fully cited all sources and aids used, and that I have clearly marked as quotations, with an indication of the source, all passages of the work, including tables, maps and figures, that were taken from other works or from the Internet, whether verbatim or paraphrased. Vienna, 05.02.2010

Contents

Abstract
Zusammenfassung
Acknowledgements
1 Introduction
  1.1 Biometric Background
    1.1.1 Dactyloscopy
    1.1.2 Digital Watermarking, Steganography and Compression
  1.2 Previous Work
    1.2.1 The Compression Model
    1.2.2 Determining the Codebook
  1.3 Contributions of this Thesis
2 Graph Theory and Algorithmic Geometry Essentials
  2.1 Graph Theory Basics
    2.1.1 Minimum Label Spanning Tree (MLST)
    2.1.2 k-Cardinality Tree (k-CT)
    2.1.3 k-node Minimum Label Spanning Arborescence (k-MLSA)
    2.1.4 Flow networks
  2.2 Algorithmic Graph Theory Essentials
  2.3 Algorithmic Geometry
    2.3.1 Binary Search Tree
    2.3.2 2-d Tree and k-d Tree
3 Optimization Theory Essentials
  3.1 History and Trivia
  3.2 Introduction to Linear Optimization
    3.2.1 Linear Programs
    3.2.2 Duality
    3.2.3 Polyhedral Theory, Solvability and Degeneration
      3.2.3.1 Farkas' Lemma
    3.2.4 Solution Methods for Linear Programs
      3.2.4.1 Geometric Method
      3.2.4.2 Simplex Algorithm
      3.2.4.3 Nonlinear Optimization Techniques
  3.3 Integer Optimization
Recommended publications
  • Advances in Interior Point Methods and Column Generation
Advances in Interior Point Methods and Column Generation
Pablo González Brevis, Doctor of Philosophy, University of Edinburgh, 2013

Declaration: I declare that this thesis was composed by myself and that the work contained therein is my own, except where explicitly stated otherwise in the text. (Pablo González Brevis)

To my endless inspiration and angels on earth: Paulina, Cristóbal and Esteban

Abstract: In this thesis we study how to efficiently combine the column generation technique (CG) and interior point methods (IPMs) for solving the relaxation of a selection of integer programming problems. In order to obtain an efficient method, a change in the column generation technique and a new reoptimization strategy for a primal-dual interior point method are proposed. It is well known that the standard column generation technique suffers from unstable behaviour due to the use of optimal dual solutions that are extreme points of the restricted master problem (RMP). This unstable behaviour slows down column generation, so variations of the standard technique which rely on interior points of the dual feasible set of the RMP have been proposed in the literature. Among these techniques is the primal-dual column generation method (PDCGM), which relies on sub-optimal and well-centred dual solutions. This technique dynamically adjusts the column generation tolerance as the method approaches optimality. Also, it relies on the notion of the symmetric neighbourhood of the central path, so that sub-optimal and well-centred solutions are obtained. We provide a thorough theoretical analysis that guarantees the convergence of the primal-dual approach even though sub-optimal solutions are used in the course of the algorithm.
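As a rough schematic of the mechanism described in this abstract, the loop below solves the restricted master only approximately with an interior point method and tightens the tolerance as the reduced costs shrink. The helper routines and the update rule are illustrative assumptions, not the actual PDCGM algorithm.

```python
# Schematic of column generation with sub-optimal, well-centred duals and a
# dynamically adjusted pricing tolerance.  `solve_rmp_approximately` and
# `pricing_oracle` are hypothetical placeholders.

def solve_rmp_approximately(columns, rel_gap):
    """Solve the restricted master problem to relative gap `rel_gap` with an
    interior point method; return (objective, well_centred_duals)."""
    raise NotImplementedError

def pricing_oracle(duals):
    """Return (new_columns, most_negative_reduced_cost)."""
    raise NotImplementedError

def stabilized_column_generation(columns, eps=0.5, tol=1e-6):
    while True:
        obj, duals = solve_rmp_approximately(columns, rel_gap=eps)
        new_cols, reduced_cost = pricing_oracle(duals)
        if not new_cols and eps <= tol:
            return obj, columns              # optimal within tolerance
        columns.extend(new_cols)
        # tighten the RMP tolerance as the method approaches optimality
        eps = max(tol, min(eps, abs(reduced_cost) / 10.0))
```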
  • Chapter 8 Constrained Optimization 2: Sequential Quadratic Programming, Interior Point and Generalized Reduced Gradient Methods
Chapter 8: Constrained Optimization 2. Sequential Quadratic Programming, Interior Point and Generalized Reduced Gradient Methods

8.1 Introduction. In the previous chapter we examined the necessary and sufficient conditions for a constrained optimum. We did not, however, discuss any algorithms for constrained optimization. That is the purpose of this chapter. The three algorithms we will study are three of the most common. Sequential Quadratic Programming (SQP) is a very popular algorithm because of its fast convergence properties. It is available in MATLAB and is widely used. The Interior Point (IP) algorithm has grown in popularity over the past 15 years and recently became the default algorithm in MATLAB. It is particularly useful for solving large-scale problems. The Generalized Reduced Gradient (GRG) method has been shown to be effective on highly nonlinear engineering problems and is the algorithm used in Excel. SQP and IP share a common background: both algorithms apply the Newton-Raphson (NR) technique for solving nonlinear equations to the KKT equations for a modified version of the problem. Thus we will begin with a review of the NR method.

8.2 The Newton-Raphson Method for Solving Nonlinear Equations. Before we get to the algorithms, there is some background we need to cover first. This includes reviewing the Newton-Raphson (NR) method for solving sets of nonlinear equations.

8.2.1 One Equation with One Unknown. The NR method is used to find the solution to sets of nonlinear equations. For example, suppose we wish to find the solution to the equation

    x + 2 = e^x

We cannot solve for x directly.
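A concrete illustration of the NR iteration introduced above: applying x_{k+1} = x_k − g(x_k)/g'(x_k) to the residual g(x) = x + 2 − e^x (assuming the reconstructed equation x + 2 = e^x; the starting point and tolerance are arbitrary choices for illustration).

```python
import math

def newton_raphson(g, dg, x0, tol=1e-10, max_iter=50):
    """Solve g(x) = 0 by the Newton-Raphson iteration x <- x - g(x)/g'(x)."""
    x = x0
    for _ in range(max_iter):
        step = g(x) / dg(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Residual form of  x + 2 = e^x : a root of g solves the original equation.
g = lambda x: x + 2.0 - math.exp(x)
dg = lambda x: 1.0 - math.exp(x)

root = newton_raphson(g, dg, x0=1.0)
print(root, g(root))   # root near 1.1462, residual close to zero
```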
  • A Branch-and-Price Approach with MILP Formulation to Modularity Density Maximization on Graphs
A BRANCH-AND-PRICE APPROACH WITH MILP FORMULATION TO MODULARITY DENSITY MAXIMIZATION ON GRAPHS (arXiv:1705.02961v3 [cs.SI], 27 Jun 2017)

Keisuke Sato, Signalling and Transport Information Technology Division, Railway Technical Research Institute, 2-8-38 Hikari-cho, Kokubunji-shi, Tokyo 185-8540, Japan
Yoichi Izunaga, Information Systems Research Division, The Institute of Behavioral Sciences, 2-9 Ichigayahonmura-cho, Shinjyuku-ku, Tokyo 162-0845, Japan

Abstract. For clustering of an undirected graph, this paper presents an exact algorithm for the maximization of modularity density, a more complicated criterion that overcomes drawbacks of the well-known modularity. The problem can be interpreted as the set-partitioning problem, which suggests its integer linear programming (ILP) formulation. We provide a branch-and-price framework for solving this ILP, i.e., column generation combined with branch-and-bound. Above all, we formulate the column generation subproblem, which has to be solved repeatedly, as a simpler mixed integer linear programming (MILP) problem. Acceleration techniques called the set-packing relaxation and multiple-cutting-planes-at-a-time, combined with the MILP formulation, enable us to optimize the modularity density for famous test instances including ones with over 100 vertices in around four minutes on a PC. Our solution method is deterministic and the computation time is not affected by any stochastic behavior. For one of the test instances, column generation at the root node of the branch-and-bound tree provides a fractional upper bound solution, and our algorithm finds an integral optimal solution after branching.

E-mail addresses: (Keisuke Sato) [email protected], (Yoichi Izunaga) [email protected].
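For orientation, the generic set-partitioning master problem that such a branch-and-price scheme generates columns for can be written as below. This is the standard textbook form over a family C of candidate clusters (vertex subsets) with objective contributions c_S, not necessarily the paper's exact formulation.

```latex
% Generic set-partitioning master problem (standard form, for illustration only)
\begin{align*}
\max\quad        & \sum_{S \in \mathcal{C}} c_S\, x_S \\
\text{s.t.}\quad & \sum_{S \in \mathcal{C}:\, v \in S} x_S = 1
                   \quad \text{for every vertex } v \in V, \\
                 & x_S \in \{0,1\} \quad \text{for every } S \in \mathcal{C}.
\end{align*}
```

Column generation works with the LP relaxation restricted to a manageable subset of C, and the pricing subproblem (here, the MILP mentioned in the abstract) searches for a cluster whose variable has a favourable reduced cost.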
  • Cancer Treatment Optimization
CANCER TREATMENT OPTIMIZATION

A Thesis Presented to The Academic Faculty by Kyungduck Cha, in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy in the H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, April 2008.

Approved by:
Professor Eva K. Lee, Advisor, H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology
Professor Earl R. Barnes, H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology
Professor Ellis L. Johnson, H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology
Professor Renato D.C. Monteiro, H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology
Professor Nolan E. Hertel, G. W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology
Date Approved: March 28, 2008

To my parents, wife, and son.

ACKNOWLEDGEMENTS

First of all, I would like to express my sincere appreciation to my advisor, Dr. Eva Lee, for her invaluable guidance, financial support, and understanding throughout my Ph.D. program of study. Her inspiration provided me with many ideas for tackling my thesis problems, and her enthusiasm encouraged me to produce better work. I would also like to thank Dr. Earl Barnes, Dr. Ellis Johnson, Dr. Renato D.C. Monteiro, and Dr. Nolan Hertel for their willingness to serve on my dissertation committee and for their valuable feedback. I also thank my friends, faculty, and staff at Georgia Tech who have contributed to this thesis in various ways. Especially, thank you to the members of the Center for Operations Research in Medicine and HealthCare for their friendship.
  • Optimal Placement by Branch-And-Price
Optimal Placement by Branch-and-Price
Pradeep Ramachandaran (1), Ameya R. Agnihotri (2), Satoshi Ono (2,3,4), Purushothaman Damodaran (1), Krishnaswami Srihari (1), Patrick H. Madden (2,4)
(1) SUNY Binghamton SSIE, (2) SUNY Binghamton CSD, (3) FAIS, (4) University of Kitakyushu

Abstract: Circuit placement has a large impact on all aspects of performance; speed, power consumption, reliability, and cost are all affected by the physical locations of interconnected transistors. The placement problem is NP-Complete for even simple metrics. In this paper, we apply techniques developed by the Operations Research (OR) community to the placement problem. Using an Integer Programming (IP) formulation and by applying a “branch-and-price” approach, we are able to optimally solve placement problems that are an order of magnitude larger than those addressed by traditional methods. Our results show that suboptimality is rampant on the small scale, and that there is merit in increasing the size of optimization windows used in detail placement.

… groups of up to 36 elements. The B&P approach is based on column generation techniques and B&B. B&P has been applied to solve large instances of well-known NP-Complete problems such as the Vehicle Routing Problem [7]. We have tested our approach on benchmarks with known optimal configurations, and also on problems extracted from the “final” placements of a number of recent tools (Feng Shui 2.0, Dragon 3.01, and mPL 3.0). We find that suboptimality is rampant: for optimization windows of nine elements, roughly half of the test cases are suboptimal. As we scale towards windows with thirty-six elements, we find that roughly 85% of
  • Branch-And-Bound Experiments in Convex Nonlinear Integer Programming
More Branch-and-Bound Experiments in Convex Nonlinear Integer Programming
Pierre Bonami, Jon Lee, Sven Leyffer, Andreas Wächter
September 29, 2011

Abstract. Branch-and-Bound (B&B) is perhaps the most fundamental algorithm for the global solution of convex Mixed-Integer Nonlinear Programming (MINLP) problems. It is well known that carrying out branching in a non-simplistic manner can greatly enhance the practicality of B&B in the context of Mixed-Integer Linear Programming (MILP). No detailed study of branching has heretofore been carried out for MINLP. In this paper, we study and identify useful sophisticated branching methods for MINLP.

1 Introduction. Branch-and-Bound (B&B) was proposed by Land and Doig [26] as a solution method for MILP (Mixed-Integer Linear Programming) problems, though the term was actually coined by Little et al. [32] shortly thereafter. Early work was summarized in [27]. Dakin [14] modified the branching to how we commonly know it now and proposed its extension to convex MINLPs (Mixed-Integer Nonlinear Programming problems); that is, MINLP problems for which the continuous relaxation is a convex program. Though a very useful backbone for ever-more-sophisticated algorithms (e.g., Branch-and-Cut, Branch-and-Price, etc.), the basic B&B algorithm is very elementary. How-

Pierre Bonami, LIF, Université de Marseille, 163 Av de Luminy, 13288 Marseille, France. E-mail: [email protected]
Jon Lee, Department of Industrial and Operations Engineering, University
  • The Central Curve in Linear Programming
THE CENTRAL CURVE IN LINEAR PROGRAMMING
JESÚS A. DE LOERA, BERND STURMFELS, AND CYNTHIA VINZANT

Abstract. The central curve of a linear program is an algebraic curve specified by linear and quadratic constraints arising from complementary slackness. It is the union of the various central paths for minimizing or maximizing the cost function over any region in the associated hyperplane arrangement. We determine the degree, arithmetic genus and defining prime ideal of the central curve, thereby answering a question of Bayer and Lagarias. These invariants, along with the degree of the Gauss image of the curve, are expressed in terms of the matroid of the input matrix. Extending work of Dedieu, Malajovich and Shub, this yields an instance-specific bound on the total curvature of the central path, a quantity relevant for interior point methods. The global geometry of central curves is studied in detail.

1. Introduction. We consider the standard linear programming problem in its primal and dual formulation:

(1) Maximize c^T x subject to Ax = b and x ≥ 0;
(2) Minimize b^T y subject to A^T y − s = c and s ≥ 0.

Here A is a fixed matrix of rank d having n columns. The vectors c ∈ R^n and b ∈ image(A) may vary. In most of our results we assume that b and c are generic. This implies that both the primal optimal solution and the dual optimal solution are unique. We also assume that the problem (1) is bounded and both problems (1) and (2) are strictly feasible. Before describing our contributions, we review some basics from the theory of linear programming [26, 33].
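For readers meeting the central path for the first time: the "linear and quadratic constraints arising from complementary slackness" are, for the primal-dual pair (1)-(2), the standard central-path conditions below (a textbook statement, not quoted from the paper).

```latex
% Central path of the primal-dual pair (1)-(2): for each \lambda > 0,
% the point (x(\lambda), y(\lambda), s(\lambda)) is defined by
\begin{align*}
A x &= b,              & x &> 0, \\
A^{T} y - s &= c,      & s &> 0, \\
x_i \, s_i &= \lambda, & i &= 1, \dots, n.
\end{align*}
% As \lambda \to 0 these points converge to an optimal primal-dual pair;
% eliminating \lambda (e.g. via x_i s_i = x_j s_j) gives the linear and
% quadratic equations referred to in the abstract.
```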
  • Improved Particle Swarm Optimization for Non-Linear Programming Problem with Barrier Method
International Journal of Students' Research in Technology & Management, eISSN: 2321-2543, Vol. 5, No. 4, 2017, pp. 72-80. https://doi.org/10.18510/ijsrtm.2017.5410

Improved Particle Swarm Optimization for Non-Linear Programming Problem with Barrier Method
Raju Prajapati (University of Engineering & Management, Kolkata, W.B., India), Om Prakash Dubey and Randhir Kumar (Bengal College of Engineering & Technology, Durgapur, W.B., India). Email: [email protected]

Article History: Submitted on 10th September, Revised on 07th November, Published on 30th November 2017.

Abstract: Non-Linear Programming Problems (NLPP) are computationally hard to solve compared to Linear Programming Problems (LPP). The available methods for solving NLPP include Lagrangian multipliers, the subgradient method, the Karush-Kuhn-Tucker conditions, and penalty and barrier methods. In this paper, we apply the barrier method to convert an NLPP with equality constraints into an NLPP without constraints. We use an improved version of the well-known Particle Swarm Optimization (PSO) method to obtain the solution of the unconstrained NLPP. The SCILAB programming language is used to evaluate the solution on sample problems. The results on the sample problems are compared for the improved PSO and the general PSO.

Keywords: Non-Linear Programming Problem, Barrier Method, Particle Swarm Optimization, Equality and Inequality Constraints.

Introduction: A large number of algorithms are available to solve optimization problems, inspired by evolutionary processes in nature, e.g. Genetic Algorithm, Particle Swarm Optimization, Ant Colony Optimization, Bee Colony Optimization, Memetic Algorithm, etc.
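To make the combination concrete, here is a minimal, generic PSO loop applied to a barrier-penalized objective. This is a plain textbook PSO sketch under assumed parameter values (inertia w, coefficients c1 and c2, a fixed barrier weight MU), not the paper's improved variant.

```python
import math
import random

def pso_minimize(f, dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimize f over a box with a standard particle swarm optimization loop."""
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_val = [f(x) for x in X]
    g_idx = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g_idx][:], pbest_val[g_idx]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            val = f(X[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = X[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = X[i][:], val
    return gbest, gbest_val

# Toy instance (assumed for illustration): minimize x0^2 + x1^2 subject to
# x0 + x1 >= 1, handled through a log-barrier term with fixed weight MU.
MU = 0.1

def barrier_objective(x):
    slack = x[0] + x[1] - 1.0
    if slack <= 0.0:
        return float("inf")   # outside the region where the barrier is defined
    return x[0] ** 2 + x[1] ** 2 - MU * math.log(slack)

best_x, best_val = pso_minimize(barrier_objective, dim=2, bounds=(0.0, 2.0))
print(best_x, best_val)   # roughly (0.55, 0.55); tends to (0.5, 0.5) as MU shrinks
```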
  • Barrier Method
Barrier Method
Ryan Tibshirani, Convex Optimization 10-725/36-725

Last time: Newton's method. Consider the problem

    min_x f(x)

for f convex, twice differentiable, with dom(f) = R^n. Newton's method: choose an initial x^(0) ∈ R^n and repeat

    x^(k) = x^(k−1) − t_k (∇²f(x^(k−1)))^(−1) ∇f(x^(k−1)),   k = 1, 2, 3, ...

Step sizes t_k are chosen by backtracking line search. If ∇f is Lipschitz, f is strongly convex, and ∇²f is Lipschitz, then Newton's method has a local convergence rate of O(log log(1/ε)). Downsides: it requires solving linear systems in the Hessian (the motivation for quasi-Newton methods), and it can only handle equality constraints (addressed in this lecture).

Hierarchy of second-order methods. Assuming all problems are convex, you can think of the following hierarchy that we've worked through:
• Quadratic problems are the easiest: closed-form solution.
• Equality-constrained quadratic problems are still easy: we use the KKT conditions to derive a closed-form solution.
• Equality-constrained smooth problems are next: use Newton's method to reduce this to a sequence of equality-constrained quadratic problems.
• Inequality- and equality-constrained smooth problems are what we cover now: use interior-point methods to reduce this to a sequence of equality-constrained smooth problems.

Log barrier function. Consider the convex optimization problem

    min_x f(x)
    subject to h_i(x) ≤ 0, i = 1, ..., m
               Ax = b

We will assume that f, h_1, ..., h_m are convex and twice differentiable, each with domain R^n. The function

    φ(x) = − Σ_{i=1}^m log(−h_i(x))

is called the log barrier for the above problem.
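As a complement to the log-barrier definition above, here is a small numerical sketch of the barrier (path-following) method for the inequality-only case with linear constraints Ax ≤ b, using a basic damped-Newton centering step. The problem instance, parameter values, and stopping rule are illustrative assumptions, not taken from the lecture.

```python
import numpy as np

def centering_step(c, A, b, x, t, max_iter=50, tol=1e-9):
    """Damped Newton for minimizing  t * c^T x - sum_i log(b_i - a_i^T x)."""
    for _ in range(max_iter):
        d = b - A @ x                        # slacks, must stay strictly positive
        grad = t * c + A.T @ (1.0 / d)
        hess = A.T @ np.diag(1.0 / d ** 2) @ A
        dx = np.linalg.solve(hess, -grad)
        lam2 = float(-grad @ dx)             # squared Newton decrement
        if lam2 / 2.0 <= tol:
            break
        obj = lambda z: t * (c @ z) - np.sum(np.log(b - A @ z))
        s = 1.0
        while np.min(b - A @ (x + s * dx)) <= 0.0:          # keep strict feasibility
            s *= 0.5
        while obj(x + s * dx) > obj(x) - 0.25 * s * lam2:    # backtracking
            s *= 0.5
        x = x + s * dx
    return x

def barrier_method(c, A, b, x0, t0=1.0, mu=10.0, eps=1e-6):
    """Minimize c^T x subject to A x <= b, starting from strictly feasible x0."""
    x, t, m = np.asarray(x0, dtype=float), t0, A.shape[0]
    while m / t >= eps:                      # m/t bounds the duality gap
        x = centering_step(c, A, b, x, t)
        t *= mu
    return x

# Tiny assumed instance: minimize -x1 - x2 over the box [0, 1]^2.
c = np.array([-1.0, -1.0])
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.array([1.0, 1.0, 0.0, 0.0])
print(barrier_method(c, A, b, x0=[0.5, 0.5]))   # approaches the corner (1, 1)
```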
  • Electrical Networks, Hyperplane Arrangements and Matroids
Electrical Networks, Hyperplane Arrangements and Matroids
by Robert P. Lutz

A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy (Mathematics) in The University of Michigan, 2019.

Doctoral Committee: Professor Jeffrey Lagarias, Chair; Professor Alexander Barvinok; Associate Professor Chris Peikert; Professor Karen Smith; Professor David Speyer.

Robert P. Lutz, a.k.a. Bob Lutz, [email protected], orcid.org/0000-0002-5407-7041. © Robert P. Lutz 2019.

Dedication: This dissertation is dedicated to my parents.

Acknowledgments: I thank Trevor Hyde, Thomas Zaslavsky, and anonymous referees for helpful comments on individual chapters of this thesis. I thank Alexander Barvinok for his careful reading of an early version of the thesis, and many useful comments. Finally I thank my advisor, Jeffrey Lagarias, for his extensive comments on the thesis, for his keen advice and moral support, and for many helpful conversations throughout my experience as a graduate student. This work was partially supported by NSF grants DMS-1401224 and DMS-1701576.

Table of Contents: Dedication; Acknowledgments; List of Figures; Abstract; Chapter 1: Introduction; Chapter 2: Background (2.1 Electrical networks; 2.2 Hyperplane arrangements; 2.2.1 Supersolvable arrangements; 2.2.2 Graphic arrangements; 2.3 Matroids; 2.3.1 Graphic matroids; 2.3.2 Complete principal truncations); Chapter 3: Electrical networks and hyperplane arrangements (3.1 Main definition and examples; 3.2 Main results; 3.3 Combinatorics of Dirichlet arrangements; 3.3.1 Intersection poset and connected partitions)
  • Problem Set 7 Convex Optimization (Winter 2018)
Problem Set 7, Convex Optimization (Winter 2018). Due Friday, February 23rd at 5pm.

• Upload both a pdf writeup of your work and a zip file containing any supplemental files to Canvas by the due date. Submissions without a non-zipped pdf writeup won't be graded.
• Late homeworks will be penalized 25% per day and will not be accepted after the next Monday at 5pm unless you have made arrangements with us well before the due date. We strongly encourage you to typeset your solutions.
• You may collaborate with other students to complete this assignment, but all submitted work should be yours. When you go to write down your solutions, please do so on your own.

Description. In this assignment, you will experiment with the log-barrier method. Partially completed Python code is included with the assignment. Areas that you need to fill in are marked with "TODO" comments. You should turn in an archive containing all of your source code, and a separate pdf document containing all plots and answers to underlined questions.

1 Log-Barrier Function. Most of the functions in the file "algorithms.py" are the same as in previous programming assignments (but filled in this time). The objective log barrier function in "algorithms.py" takes as parameters an objective function f0, a constraint function φ (which has the same calling convention as the objective function), a vector x, and a scaling parameter t, and returns the value, gradient and Hessian of the log-barrier objective:

    f(x; t) = t f0(x) + φ(x)

1.1 Implementation. The function quadratic_log_barrier in "functions.py" is supposed to compute the value, gradient, and Hessian of:

    f(x; t) = t (1/2 xᵀ Q x + vᵀ x) − Σ_i log(a_{i,:} x + b_i)

This is the log-barrier objective function f(x; t) = t f0(x) + φ(x) for a quadratic objective f0, log-barrier function φ, and linear constraints Ax + b ≥ 0.
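The assignment's own template is not included here, so the following is only a sketch of what such a function might compute: a hypothetical quadratic_log_barrier(x, t, Q, v, A, b) returning the value, gradient and Hessian of f(x; t) above. The signature and sample data are assumptions, not the course's actual code.

```python
import numpy as np

def quadratic_log_barrier(x, t, Q, v, A, b):
    """Value, gradient and Hessian of
       f(x; t) = t*(0.5*x^T Q x + v^T x) - sum_i log(a_i^T x + b_i),
    assuming A x + b > 0 componentwise (hypothetical signature)."""
    d = A @ x + b                         # constraint slacks, must be > 0
    if np.any(d <= 0):
        return np.inf, None, None         # outside the domain of the barrier
    value = t * (0.5 * x @ Q @ x + v @ x) - np.sum(np.log(d))
    grad = t * (Q @ x + v) - A.T @ (1.0 / d)
    hess = t * Q + A.T @ np.diag(1.0 / d ** 2) @ A
    return value, grad, hess

# Small usage example with made-up data (constraints x >= 0 via A = I, b = 0):
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
v = np.array([-1.0, -1.0])
A = np.eye(2)
b = np.zeros(2)
val, g, H = quadratic_log_barrier(np.array([0.5, 0.5]), t=1.0, Q=Q, v=v, A=A, b=b)
print(val, g, H)
```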
  • Price-Based Distributed Optimization in Large-Scale Networked Systems
Price-Based Distributed Optimization in Large-Scale Networked Systems

Dissertation document submitted to the Division of Research and Advanced Studies of the University of Cincinnati in partial fulfillment of the requirements of the degree of Doctor of Philosophy in the School of Dynamic Systems, College of Engineering and Applied Sciences, by Baisravan HomChaudhuri, Summer 2013. MS, University of Cincinnati, 2010; BE, Jadavpur University, 2007. Committee Chair: Dr. Manish Kumar. © 2013 Baisravan HomChaudhuri, All Rights Reserved.

Abstract: This work is intended towards the development of distributed optimization methods for large-scale complex networked systems. The advancement in technological fields such as networking, communication and computing has facilitated the development of networks which are massively large-scale in nature. Examples of such systems include the power grid, communication networks, the Internet, large manufacturing plants, and cloud computing clusters. One of the important challenges in these networked systems is the evaluation of the optimal point of operation of the system. Most centralized optimization algorithms suffer from the curse of dimensionality, which raises the issue of scalability of algorithms for large-scale systems. The problem is essentially challenging not only due to the high dimensionality of the problem, but also due to the distributed nature of resources, the lack of global information, and the uncertain and dynamic nature of operation of most of these systems. The inadequacies of the traditional centralized optimization