Local Search and Restart Strategies for Satisfiability Solving in Fuzzy Logics


Tim Brys∗, Madalina M. Drugan∗, Peter A.N. Bosman†, Martine De Cock§ and Ann Nowé∗
∗Artificial Intelligence Lab, VUB, Pleinlaan 2, 1050 Brussels, Belgium, {timbrys, mdrugan, anowe}@vub.ac.be
†Centrum Wiskunde & Informatica (CWI), P.O. Box 94079, 1090 GB Amsterdam, The Netherlands, [email protected]
§Dept. of Applied Math. and Comp. Sc., UGent, Krijgslaan 281 (S9), 9000 Gent, Belgium, [email protected]

Abstract—Satisfiability solving in fuzzy logics is a subject that has not been researched much, certainly compared to satisfiability in propositional logics. Yet, fuzzy logics are a powerful tool for modelling complex problems. Recently, we proposed an optimization approach to solving satisfiability in fuzzy logics and compared the standard Covariance Matrix Adaptation Evolution Strategy algorithm (CMA-ES) with an analytical solver on a set of benchmark problems. Especially on more finegrained problems did CMA-ES compare favourably to the analytical approach. In this paper, we evaluate two types of hillclimber in addition to CMA-ES, as well as restart strategies for these algorithms. Our results show that a population-based hillclimber outperforms CMA-ES on the harder problem class.

I. INTRODUCTION

The problem of SAT in propositional logic [1] is well known and is of interest to researchers from various domains [2], [3], as many problems can be reformulated as a SAT problem and subsequently solved by a state-of-the-art SAT solver.

In fuzzy logics – logics with an infinite number of truth degrees – the same principle of satisfiability exists, SAT∞, and it is, like its classical counterpart, useful for solving a variety of problems. Indeed, many fuzzy reasoning tasks can be reduced to SAT∞, including reasoning about vague concepts in the context of the semantic web [4], fuzzy spatial reasoning [5] and fuzzy answer set programming [6], which in itself is an important framework for non-monotonic reasoning over continuous domains (see e.g. [7], [8], [9]).

In contrast to SAT, SAT∞ has received much less attention from the various research communities; there are only two types of analytical solvers to be found in the literature, one using mixed integer programming [10], and one using a constraint satisfaction approach [11]. Both methods suffer from exponential complexity issues associated respectively with mixed integer programming itself and an iteratively refining discretization.

Recently, we proposed a new approach to solving SAT∞ [12] that models satisfiability as an optimization problem over a continuous domain, and has the advantage of being independent of the underlying logic and its operators, as well as not suffering from the complexity issues associated with analytical solvers, as our results showed. The disadvantage of this approach is that only given an optimization algorithm proven to converge to the global optimum can it be a complete solver, i.e. deciding both SAT∞ and UNSAT∞. The optimization algorithm used in [12] was the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) [13], which is considered to be state-of-the-art in black-box optimization on a continuous domain. In this paper we explore some alternatives to this algorithm as the core of this SAT∞ solver. Our findings are somewhat surprising in that CMA-ES is outperformed by a hillclimber on one problem class.

The rest of the paper is structured as follows: in Section II, we give a brief description of fuzzy logics and formally define SAT∞ as an optimization problem. Section III contains the description of a hillclimber algorithm (III-A), a population-based version of that algorithm (III-B) and CMA-ES (III-C), as well as restart strategies for these algorithms. These algorithms are then evaluated and compared on a set of benchmark problems in Section IV.

II. FUZZY LOGIC AND SAT∞

In fuzzy logics [14], truth is expressed as a real number taken from the unit interval [0, 1]. Essentially, there are an infinite number of truth degrees possible. A formula in a fuzzy logic is built from a set of variables V, constants taken from [0, 1] and n-ary connectives for n ∈ ℕ. An interpretation is a mapping I : V → [0, 1] that maps every variable to a truth degree. We can extend this fuzzy interpretation I to formulas as follows:

• For each constant c in [0, 1], [c]_I = c.
• For each variable v in V, [v]_I = I(v).
• Each n-ary connective f is interpreted by a function f : [0, 1]ⁿ → [0, 1]. Furthermore we define

[f(α_1, ..., α_n)]_I = f([α_1]_I, ..., [α_n]_I)

for formulas α_i with 1 ≤ i ≤ n.

The connectives in fuzzy logics typically correspond to connectives from classical logic, such as conjunction, disjunction, implication and negation, which are interpreted respectively by a t-norm, a t-conorm, an implicator and a negator. A triangular norm or t-norm T is an increasing, associative and commutative [0, 1]² → [0, 1] mapping that satisfies the boundary condition T(1, x) = x for all x in [0, 1]. Similarly, a triangular conorm or t-conorm S is an increasing, associative and commutative [0, 1]² → [0, 1] mapping that satisfies the boundary condition S(0, x) = x. An implicator I is a [0, 1]² → [0, 1] mapping that is decreasing in its first argument, increasing in its second argument and that satisfies the properties I(0, 0) = 1, I(0, 1) = 1, I(1, 1) = 1 and I(1, 0) = 0. A negator N is a decreasing [0, 1] → [0, 1] mapping that satisfies N(0) = 1 and N(1) = 0. As an example of a particularly popular fuzzy logic, in Łukasiewicz logic, negation, conjunction, disjunction and implication are interpreted respectively by N(x) = 1 − x, T(x, y) = max(0, x + y − 1), S(x, y) = min(1, x + y) and I(x, y) = min(1, 1 − x + y).

An instance of SAT∞ consists of formulas α_1, ..., α_n with associated bounds: an interpretation I satisfies the instance if l_i ≤ [α_i]_I ≤ u_i for each formula. Given these n formulas α_i, bounds (u_i, l_i), and an interpretation I, we define the objective function f as follows:

f(I) = (1/n) Σ_{i=1}^{n} f_I(α_i, l_i, u_i)    (2)

f_I(α_i, l_i, u_i) = 1                            if l_i ≤ [α_i]_I ≤ u_i
                   = [α_i]_I / l_i                if [α_i]_I < l_i
                   = (1 − [α_i]_I) / (1 − u_i)    if [α_i]_I > u_i    (3)

Fig. 1. f_I(α_i) for a formula α_i with lower bound l, upper bound u and degree of satisfaction [α_i]_I.
Recommended publications
  • A Branch-and-Price Approach with MILP Formulation to Modularity Density Maximization on Graphs
    A BRANCH-AND-PRICE APPROACH WITH MILP FORMULATION TO MODULARITY DENSITY MAXIMIZATION ON GRAPHS
    KEISUKE SATO, Signalling and Transport Information Technology Division, Railway Technical Research Institute, 2-8-38 Hikari-cho, Kokubunji-shi, Tokyo 185-8540, Japan
    YOICHI IZUNAGA, Information Systems Research Division, The Institute of Behavioral Sciences, 2-9 Ichigayahonmura-cho, Shinjyuku-ku, Tokyo 162-0845, Japan
    Abstract. For clustering of an undirected graph, this paper presents an exact algorithm for the maximization of modularity density, a more complicated criterion to overcome drawbacks of the well-known modularity. The problem can be interpreted as the set-partitioning problem, which reminds us of its integer linear programming (ILP) formulation. We provide a branch-and-price framework for solving this ILP, or column generation combined with branch-and-bound. Above all, we formulate the column generation subproblem to be solved repeatedly as a simpler mixed integer linear programming (MILP) problem. Acceleration techniques called the set-packing relaxation and the multiple-cutting-planes-at-a-time combined with the MILP formulation enable us to optimize the modularity density for famous test instances including ones with over 100 vertices in around four minutes by a PC. Our solution method is deterministic and the computation time is not affected by any stochastic behavior. For one of them, column generation at the root node of the branch-and-bound tree provides a fractional upper bound solution and our algorithm finds an integral optimal solution after branching. (arXiv:1705.02961v3 [cs.SI], 27 Jun 2017)
    E-mail addresses: (Keisuke Sato) [email protected], (Yoichi Izunaga) [email protected].
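The abstract takes modularity density as given; as background, here is a small Python sketch of one standard definition of the criterion (Li et al. 2008): per community, twice the internal edge count minus the outgoing edge count, divided by community size. This is an assumed formulation for illustration; the paper's ILP encodes the criterion differently.

```python
def modularity_density(edges, communities):
    """Modularity density in a common formulation (Li et al. 2008):
    sum over communities c of (2*internal(c) - external(c)) / |c|.
    `edges` is an iterable of undirected pairs; `communities` a list of node sets."""
    score = 0.0
    for c in communities:
        internal = sum(1 for u, v in edges if u in c and v in c)
        external = sum(1 for u, v in edges if (u in c) != (v in c))
        score += (2 * internal - external) / len(c)
    return score

# Two triangles joined by one bridge edge: the natural 2-clustering scores well.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(modularity_density(edges, [{0, 1, 2}, {3, 4, 5}]))  # 2 * (2*3 - 1)/3 = 3.33...
```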
  • Linear Programming Notes X: Integer Programming
    Linear Programming Notes X: Integer Programming
    1 Introduction
    By now you are familiar with the standard linear programming problem. The assumption that choice variables are infinitely divisible (can be any real number) is unrealistic in many settings. When we asked how many chairs and tables should the profit-maximizing carpenter make, it did not make sense to come up with an answer like "three and one half chairs." Maybe the carpenter is talented enough to make half a chair (using half the resources needed to make the entire chair), but probably she wouldn't be able to sell half a chair for half the price of a whole chair. So, sometimes it makes sense to add to a problem the additional constraint that some (or all) of the variables must take on integer values. This leads to the basic formulation. Given c = (c1, ..., cn), b = (b1, ..., bm), A a matrix with m rows and n columns (and entry aij in row i and column j), and I a subset of {1, ..., n}, find x = (x1, ..., xn)

    max c · x subject to Ax ≤ b, x ≥ 0, xj is an integer whenever j ∈ I.    (1)

    What is new? The set I and the constraint that xj is an integer when j ∈ I. Everything else is like a standard linear programming problem. I is the set of components of x that must take on integer values. If I is empty, then the integer programming problem is a linear programming problem. If I is not empty but does not include all of {1, . . . , n}, then sometimes the problem is called a mixed integer programming problem.
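Since the notes' carpenter example motivates the integrality constraint, here is a tiny brute-force instance of formulation (1) in Python; the profit and resource numbers are made up for illustration, and enumeration only works because the instance is so small.

```python
from itertools import product

# Toy instance of formulation (1) with I = {1, 2} (both variables integer):
# a carpenter chooses chairs x1 and tables x2 to maximize profit c . x
# subject to wood and labor limits. All numbers are hypothetical.
c = (3, 5)            # profit per chair, per table
A = [(2, 4),          # wood used per unit
     (1, 3)]          # labor hours per unit
b = (14, 9)

best, best_x = None, None
for x in product(range(8), repeat=2):          # small bounds, so enumerate
    if all(sum(a_j * x_j for a_j, x_j in zip(row, x)) <= rhs
           for row, rhs in zip(A, b)):
        value = sum(c_j * x_j for c_j, x_j in zip(c, x))
        if best is None or value > best:
            best, best_x = value, x
print(best_x, best)   # (7, 0): an integral optimum, no half chairs
```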
  • Integer Linear Programming
    Integer Linear Programming
    Subhas C. Nandy ([email protected]), Advanced Computing and Microelectronics Unit, Indian Statistical Institute, Kolkata 700108, India.
    Organization: 1. Introduction; 2. Linear Programming; 3. Integer Programming.
    Linear Programming: a technique for optimizing a linear objective function, subject to a set of linear equality and linear inequality constraints. Mathematically,

    maximize c1x1 + c2x2 + ... + cnxn
    subject to: a11x1 + a12x2 + ... + a1nxn ≤ b1
                a21x1 + a22x2 + ... + a2nxn ≤ b2
                ...
                am1x1 + am2x2 + ... + amnxn ≤ bm
                xi ≥ 0 for all i = 1, 2, ..., n.

    In matrix notation: maximize C^T X subject to AX ≤ B, X ≥ 0, where C is an n × 1 vector (cost vector), A is an m × n matrix (coefficient matrix), B is an m × 1 vector (requirement vector), and X is an n × 1 vector of unknowns.
    Dantzig developed it during World War II as a way to plan expenditures and returns so as to reduce costs to the army and increase losses incurred by the enemy. The method was kept secret until 1947 when George B. Dantzig published the simplex method and John von Neumann developed the theory of duality as a linear optimization solution. Dantzig's original example was to find the best assignment of 70 people to 70 jobs subject to constraints. The computing power required to test all the permutations to select the best assignment is vast. However, the theory behind linear programming drastically reduces the number of feasible solutions that must be checked for optimality.
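For the matrix form above, a short sketch using SciPy's linprog (which minimizes, hence the negated cost vector); the data is the same toy instance as in the previous sketch, not anything from the lecture.

```python
import numpy as np
from scipy.optimize import linprog

C = np.array([3.0, 5.0])                 # cost (here: profit) vector, n x 1
A = np.array([[2.0, 4.0], [1.0, 3.0]])   # coefficient matrix, m x n
B = np.array([14.0, 9.0])                # requirement vector, m x 1

# maximize C^T X  s.t.  AX <= B, X >= 0  (linprog minimizes, so pass -C)
res = linprog(-C, A_ub=A, b_ub=B, bounds=[(0, None)] * 2)
print(res.x, -res.fun)                   # optimizer and maximal objective value
```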
  • IEOR 269, Spring 2010 Integer Programming and Combinatorial Optimization
    IEOR 269, Spring 2010: Integer Programming and Combinatorial Optimization
    Professor Dorit S. Hochbaum
    Contents:
    1 Introduction
    2 Formulation of some ILP
      2.1 0-1 knapsack problem
      2.2 Assignment problem
    3 Non-linear objective functions
      3.1 Production problem with set-up costs
      3.2 Piecewise linear cost function
      3.3 Piecewise linear convex cost function
      3.4 Disjunctive constraints
    4 Some famous combinatorial problems
      4.1 Max clique problem
      4.2 SAT (satisfiability)
      4.3 Vertex cover problem
    5 General optimization
    6 Neighborhood
      6.1 Exact neighborhood
    7 Complexity of algorithms
      7.1 Finding the maximum element
      7.2 0-1 knapsack
      7.3 Linear systems
      7.4 Linear programming
    8 Some interesting IP formulations
      8.1 The fixed cost plant location problem
      8.2 Minimum/maximum spanning tree (MST)
    9 The Minimum Spanning Tree (MST) Problem
    10 General Matching Problem
      10.1 Maximum matching problem in bipartite graphs
      10.2 Maximum matching problem in non-bipartite graphs
      10.3 Constraint matrix analysis for matching problems
    11 Traveling Salesperson Problem (TSP)
      11.1 IP formulation for TSP
    12 Discussion of LP-formulation for MST
    13 Branch-and-Bound
      13.1 The Branch-and-Bound technique
      13.2 Other Branch-and-Bound techniques
    14 Basic graph definitions
    15 Complexity analysis
      15.1 Measuring quality of an algorithm
  • A Branch-and-Bound Algorithm for Zero-One Mixed Integer Programming Problems
    A BRANCH-AND-BOUND ALGORITHM FOR ZERO-ONE MIXED INTEGER PROGRAMMING PROBLEMS
    Ronald E. Davis, Stanford University, Stanford, California; David A. Kendrick, University of Texas, Austin, Texas; and Martin Weitzman, Yale University, New Haven, Connecticut (Received August 7, 1969)
    This paper presents the results of experimentation on the development of an efficient branch-and-bound algorithm for the solution of zero-one linear mixed integer programming problems. An implicit enumeration is employed using bounds that are obtained from the fractional variables in the associated linear programming problem. The principal mathematical result used in obtaining these bounds is the piecewise linear convexity of the criterion function with respect to changes of a single variable in the interval [0, 1]. A comparison with the computational experience obtained with several other algorithms on a number of problems is included. MANY IMPORTANT practical problems of optimization in management, economics, and engineering can be posed as so-called 'zero-one mixed integer problems,' i.e., as linear programming problems in which a subset of the variables is constrained to take on only the values zero or one. When indivisibilities, economies of scale, or combinatoric constraints are present, formulation in the mixed-integer mode seems natural. Such problems arise frequently in the contexts of industrial scheduling, investment planning, and regional location, but they are by no means limited to these areas. Unfortunately, at the present time the performance of most comprehensive algorithms on this class of problems has been disappointing. This study was undertaken in hopes of devising a more satisfactory approach. In this effort we have drawn on the computational experience of, and the concepts employed in, the LAND AND DOIG,[16] HEALY,[13] and DRIEBEEK algorithms.
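To ground the abstract's idea of bounds derived from the LP relaxation's fractional variables, here is a self-contained branch-and-bound for the 0-1 knapsack special case, where the LP relaxation is solvable greedily and leaves at most one fractional item. A sketch of the general technique, not the paper's algorithm.

```python
def knapsack_bb(values, weights, capacity):
    """Branch-and-bound for 0-1 knapsack. The upper bound at each node is the
    LP relaxation, solved greedily by filling in value/weight order with at
    most one fractional item -- the 'fractional variable' driving the bound."""
    order = sorted(range(len(values)), key=lambda i: values[i] / weights[i],
                   reverse=True)

    def lp_bound(k, value, room):
        # Relax x_i in [0, 1] for the undecided items order[k:].
        for i in order[k:]:
            if weights[i] <= room:
                room -= weights[i]
                value += values[i]
            else:
                return value + values[i] * room / weights[i]  # fractional item
        return value

    best = 0
    def branch(k, value, room):
        nonlocal best
        if value > best:
            best = value
        if k == len(order) or lp_bound(k, value, room) <= best:
            return                              # prune: bound can't beat incumbent
        i = order[k]
        if weights[i] <= room:                  # branch x_i = 1
            branch(k + 1, value + values[i], room - weights[i])
        branch(k + 1, value, room)              # branch x_i = 0

    branch(0, 0, capacity)
    return best

print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))  # classic instance: 220
```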
  • Matroid Partitioning Algorithm Described in the Paper Here with Ray’s Interest in Doing Everything Possible by Using Network Flow Methods
    Chapter 7: Matroid Partition, Jack Edmonds
    Introduction by Jack Edmonds
    This article, "Matroid Partition", which first appeared in the book edited by George Dantzig and Pete Veinott, is important to me for many reasons: First for personal memories of my mentors, Alan J. Goldman, George Dantzig, and Al Tucker. Second, for memories of close friends, as well as mentors, Al Lehman, Ray Fulkerson, and Alan Hoffman. Third, for memories of Pete Veinott, who, many years after he invited and published the present paper, became a closest friend. And, finally, for memories of how my mixed-blessing obsession with good characterizations and good algorithms developed. Alan Goldman was my boss at the National Bureau of Standards in Washington, D.C., now the National Institute of Standards and Technology, in the suburbs. He meticulously vetted all of my math including this paper, and I would not have been a math researcher at all if he had not encouraged it when I was a university drop-out trying to support a baby and stay-at-home teenage wife. His mentor at Princeton, Al Tucker, through him of course, invited me with my child and wife to be one of the three junior participants in a 1963 Summer of Combinatorics at the Rand Corporation in California, across the road from Muscle Beach. The Bureau chiefs would not approve this so I quit my job at the Bureau so that I could attend. At the end of the summer Alan hired me back with a big raise. Dantzig was and still is the only historically towering person I have known.
  • Introduction to Approximation Algorithms
    CSC 373 - Algorithm Design, Analysis, and Complexity (Summer 2016), Lalla Mouatadid
    Introduction to Approximation Algorithms
    The focus of these last two weeks of class will be on how to deal with NP-hard problems. If we don't know how to solve them, what would be the first intuitive way to tackle them? One well studied method is approximation algorithms: algorithms that run in polynomial time and approximate the solution within a constant factor of the optimal solution. How well can we approximate these solutions? And can we approximate all NP-hard problems? Before we try to answer any of these questions, let's formally define what we mean by approximating a solution. Notice first that if we're considering how "good" a solution is, then we must be dealing with optimization problems, instead of decision problems. Recall that when dealing with an optimization problem, the goal is to minimize or maximize some objective function. For instance, maximizing the size of an independent set or minimizing the size of a vertex cover. Since these problems are NP-hard, we focus on polynomial time algorithms that give us an approximate solution. More formally: Let P be an optimization problem. For every input x of P, let OPT(x) denote the optimal value of the solution for x. Let A be an algorithm and c a constant such that c ≥ 1. 1. Suppose P is a minimization problem. We say that A is a c-approximation for P if for every input x of P: OPT(x) ≤ A(x) ≤ c · OPT(x).
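As a concrete c-approximation with c = 2, here is the classic maximal-matching heuristic for minimum vertex cover, one of the problems the excerpt names; a sketch, not course material.

```python
def vertex_cover_2approx(edges):
    """Greedy maximal matching gives a 2-approximation for minimum vertex
    cover: both endpoints of each matched edge enter the cover, and any
    optimal cover must contain at least one endpoint per matched edge."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:  # edge not yet covered
            cover.add(u)
            cover.add(v)
    return cover

# Path 1-2-3-4: the optimum cover {2, 3} has size 2; this returns {1, 2, 3, 4},
# size 4 = 2 * OPT, matching the guarantee OPT(x) <= A(x) <= 2 * OPT(x).
print(vertex_cover_2approx([(1, 2), (2, 3), (3, 4)]))
```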
  • Generic Branch-Cut-And-Price
    Generic Branch-Cut-and-Price
    Diploma thesis ("Diplomarbeit") with PD Dr. M. Lübbecke, submitted by Gerald Gamrath¹
    Department of Mathematics, Technische Universität Berlin; Berlin, 16 March 2010
    ¹Konrad-Zuse-Zentrum für Informationstechnik Berlin, [email protected]
    Contents:
    Acknowledgments
    1 Introduction
      1.1 Definitions
      1.2 A Brief History of Branch-and-Price
    2 Dantzig-Wolfe Decomposition for MIPs
      2.1 The Convexification Approach
      2.2 The Discretization Approach
      2.3 Quality of the Relaxation
    3 Extending SCIP to a Generic Branch-Cut-and-Price Solver
      3.1 SCIP - a MIP Solver
      3.2 GCG - a Generic Branch-Cut-and-Price Solver
      3.3 Computational Environment
    4 Solving the Master Problem
      4.1 Basics in Column Generation
        4.1.1 Reduced Cost Pricing
        4.1.2 Farkas Pricing
        4.1.3 Finiteness and Correctness
      4.2 Solving the Dantzig-Wolfe Master Problem
      4.3 Implementation Details
        4.3.1 Farkas Pricing
        4.3.2 Reduced Cost Pricing
        4.3.3 Making Use of Bounds
      4.4 Computational Results
        4.4.1 Farkas Pricing
        4.4.2 Reduced Cost Pricing
    5 Branching
      5.1 Branching on Original Variables
      5.2 Branching on Variables of the Extended Problem
      5.3 Branching on Aggregated Variables
      5.4 Ryan and Foster Branching
      5.5 Other Branching Rules
      5.6 Implementation Details
        5.6.1 Branching on Original Variables
        5.6.2 Ryan and Foster Branching
  • Lecture 11: Integer Programming and Branch and Bound (1.12.2010), Lecturer: Prof. Friedrich Eisenbrand
    Optimization Methods in Finance (EPFL, Fall 2010)
    Lecture 11: Integer programming and Branch and Bound, 1.12.2010
    Lecturer: Prof. Friedrich Eisenbrand; Scribe: Boris Oumow

    1 Integer Programming
    Definition 1. An integer program (IP) is a problem of the form:

    max c^T x  s.t.  Ax ≤ b, x ∈ Z^n

    If we drop the last constraint x ∈ Z^n, the linear program obtained is called the LP-relaxation of (IP).
    Example 2 (Combinatorial auctions). A combinatorial auction is a type of smart market in which participants can place bids on combinations of discrete items, or "packages", rather than just individual items or continuous quantities. In many auctions, the value that a bidder has for a set of items may not be the sum of the values that he has for individual items. It may be more or it may be less. In other words, let M = {1, ..., m} be the set of items that the auctioneer has to sell. A bid is a pair B_j = (S_j, p_j) where S_j ⊆ M is a nonempty subset of the items and p_j is the price for this set. We suppose that the auctioneer has received n bids B_j, j = 1, ..., n.
    Question: How should the auctioneer determine the winners and the losers of bidding in order to maximize his revenue? In Lecture 10, we have seen a numerical example of this problem. Let's see now its generalization. The (IP) for this problem is:

    max Σ_{j=1}^{n} p_j x_j  s.t.  Ax ≤ 1, 0 ≤ x ≤ 1, x ∈ Z^n

    where A ∈ {0,1}^{m×n} is the matrix defined by A_ij = 1 if i ∈ S_j, and A_ij = 0 otherwise.

    Table 1: Lost interest in thousands of $ per year.
               L.A.   Pittsburgh   Boston   Houston
    West         60          120      180       180
    Midwest      48           24       60        60
    East        216          180       72       180
    South       126           90      108        54
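A brute-force Python solution of the winner-determination IP above, on a tiny made-up instance; real solvers branch and bound instead of enumerating, but enumeration makes the Ax ≤ 1 disjointness constraint tangible.

```python
from itertools import combinations

def winner_determination(bids):
    """Brute-force solution of the combinatorial-auction IP: pick a subset of
    bids maximizing total price such that no item is sold twice (Ax <= 1).
    Exponential in the number of bids, but fine for tiny instances."""
    best_revenue, best_choice = 0, ()
    for r in range(1, len(bids) + 1):
        for choice in combinations(bids, r):
            items = [i for s, _ in choice for i in s]
            if len(items) == len(set(items)):          # pairwise disjoint S_j
                revenue = sum(p for _, p in choice)
                if revenue > best_revenue:
                    best_revenue, best_choice = revenue, choice
    return best_choice, best_revenue

# Bids B_j = (S_j, p_j) over items {1, 2, 3}; the numbers are made up.
bids = [({1, 2}, 5), ({2, 3}, 4), ({3}, 2), ({1}, 3)]
print(winner_determination(bids))  # sells {1,2} for 5 and {3} for 2: revenue 7
```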
  • An Efficient Solution Methodology for Mixed-Integer Programming Problems Arising in Power Systems (2016)
    University of Connecticut, OpenCommons@UConn, Doctoral Dissertations, University of Connecticut Graduate School, 12-15-2016
    An Efficient Solution Methodology for Mixed-Integer Programming Problems Arising in Power Systems
    Mikhail Bragin, University of Connecticut - Storrs, [email protected]
    Recommended Citation: Bragin, Mikhail, "An Efficient Solution Methodology for Mixed-Integer Programming Problems Arising in Power Systems" (2016). Doctoral Dissertations. 1318. https://opencommons.uconn.edu/dissertations/1318
    Mikhail Bragin, PhD, University of Connecticut, 2016
    For many important mixed-integer programming (MIP) problems, the goal is to obtain near-optimal solutions with quantifiable quality in a computationally efficient manner (within, e.g., 5, 10 or 20 minutes). A traditional method to solve such problems has been Lagrangian relaxation, but the method suffers from zigzagging of multipliers and slow convergence. When solving mixed-integer linear programming (MILP) problems, the recently adopted branch-and-cut may also suffer from slow convergence because when the convex hull of the problems has complicated facial structures, facet-defining cuts are typically difficult to obtain, and the method relies mostly on time-consuming branching operations. In this thesis, the novel Surrogate Lagrangian Relaxation method is developed and its convergence is proved to the optimal multipliers, without the knowledge of the optimal dual value and without fully optimizing the relaxed problem. Moreover, for practical implementations a stepsizing formula, that guarantees convergence without requiring the optimal dual value, has been constructively developed. The key idea is to select stepsizes in a way that distances between Lagrange multipliers at consecutive iterations decrease, and as a result, Lagrange multipliers converge to a unique limit.
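The excerpt only hints at the stepsize idea; as generic background (not the thesis's surrogate rule), here is a textbook subgradient update for Lagrange multipliers with geometrically shrinking stepsizes, which makes consecutive multiplier iterates approach one another, the contraction property the abstract alludes to.

```python
def subgradient_updates(grad, lam0, steps=50, s0=1.0, rho=0.9):
    """Textbook subgradient ascent on the dual: lam <- lam + s_k * g(lam).
    With s_k = s0 * rho**k, the distance between consecutive multiplier
    iterates shrinks geometrically; the thesis's constructive stepsize rule
    achieves a similar contraction by different means."""
    lam = lam0
    for k in range(steps):
        g = grad(lam)                 # (sub)gradient of the dual at lam
        lam = lam + s0 * rho**k * g
    return lam

# Toy dual: maximize q(lam) = -(lam - 3)**2, with gradient q'(lam) = -2(lam - 3).
print(subgradient_updates(lambda l: -2 * (l - 3), lam0=0.0))  # approaches 3
```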
  • Matroid Intersection and Its Application to a Multiple Depot, Multiple TSP
    Institute of Transportation Studies, University of California at Berkeley
    Matroid Intersection and its application to a Multiple Depot, Multiple TSP
    Sivakumar Rathinam and Raja Sengupta
    RESEARCH REPORT UCB-ITS-RR-2006-3, May 2006, ISSN 0192 4095
    Abstract. This paper extends the Held-Karp lower bound available for a single Travelling Salesman Problem to the following symmetric Multiple Depot, Multiple Travelling Salesman Problem (MDMTSP): Given k salesmen that start at different depots, k terminals and n destinations, the problem is to choose paths for each of the salesmen so that (1) each vehicle starts at its respective depot, visits at least one destination and reaches any one of the terminals not visited by other vehicles, (2) each destination is visited by exactly one vehicle and (3) the cost of the paths is a minimum among all possible paths for the salesmen. The criterion for the cost of paths considered is the total cost of the edges travelled by the entire collection. This MDMTSP is formulated as a minimum cost constrained forest problem subject to side constraints. By perturbing the costs of each edge from Cij to Cij := Cij + πi + πj, one obtains an infinite family of lower bounds, denoted by w(π), for the MDMTSP. Each lower bound, w(π), involves calculating a minimum cost constrained forest. The minimum cost constrained forest problem is posed as a weighted two matroid intersection problem and hence can be solved in polynomial time. Since the polyhedron corresponding to the matroid intersection problem has the integrality property, it follows that the optimal cost corresponding to the LP relaxation of the MDMTSP is actually equal to max_π w(π).
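The single-TSP version of the bound the paper generalizes can be sketched compactly: perturb the costs to Cij + πi + πj, take a minimum 1-tree, and subtract 2Σπ. The instance below is made up, and the MDMTSP version replaces the 1-tree with a constrained forest found via matroid intersection.

```python
def one_tree_bound(cost, pi):
    """Held-Karp style lower bound for the single TSP: perturb costs to
    c'_ij = c_ij + pi_i + pi_j, build a minimum 1-tree (MST on nodes 1..n-1
    plus the two cheapest perturbed edges at node 0), subtract 2*sum(pi).
    Every tour is a 1-tree, so any tour's cost is >= this w(pi)."""
    n = len(cost)
    c = [[cost[i][j] + pi[i] + pi[j] for j in range(n)] for i in range(n)]

    # Prim's algorithm: minimum spanning tree over nodes {1, ..., n-1}.
    dist = {j: c[1][j] for j in range(2, n)}
    mst = 0.0
    while dist:
        j = min(dist, key=dist.get)       # cheapest attachment to the tree
        mst += dist.pop(j)
        for k in dist:
            dist[k] = min(dist[k], c[j][k])

    # Re-attach node 0 through its two cheapest perturbed edges.
    two_cheapest = sorted(c[0][j] for j in range(1, n))[:2]
    return mst + sum(two_cheapest) - 2 * sum(pi)

# Symmetric 4-city instance with made-up costs; pi = 0 gives the plain 1-tree bound.
cost = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(one_tree_bound(cost, [0.0] * 4))  # 18, which this instance's best tour attains
```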
  • Matrices of Optimal Tree-Depth and Row-Invariant Parameterized Algorithm for Integer Programming
    Matrices of Optimal Tree-Depth and Row-Invariant Parameterized Algorithm for Integer Programming
    Timothy F. N. Chan, School of Mathematical Sciences, Monash University, Melbourne, Australia; Mathematics Institute and DIMAP, University of Warwick, Coventry, UK, [email protected]
    Jacob W. Cooper, Faculty of Informatics, Masaryk University, Brno, Czech Republic, [email protected]
    Martin Koutecký, Computer Science Institute, Charles University, Prague, Czech Republic, [email protected]
    Daniel Král’, Faculty of Informatics, Masaryk University, Brno, Czech Republic; Mathematics Institute, DIMAP and Department of Computer Science, University of Warwick, Coventry, UK, dkral@fi.muni.cz
    Kristýna Pekárková, Faculty of Informatics, Masaryk University, Brno, Czech Republic, [email protected]
    Abstract. A long line of research on fixed parameter tractability of integer programming culminated with showing that integer programs with n variables and a constraint matrix with tree-depth d and largest entry ∆ are solvable in time g(d, ∆)poly(n) for some function g, i.e., fixed parameter tractable when parameterized by tree-depth d and ∆. However, the tree-depth of a constraint matrix depends on the positions of its non-zero entries and thus does not reflect its geometric structure. In particular, tree-depth of a constraint matrix is not preserved by row operations, i.e., a given integer program can be equivalent to another with a smaller dual tree-depth. We prove that the branch-depth of the matroid defined by the columns of the constraint matrix is equal to the minimum tree-depth of a row-equivalent matrix. We also design a fixed parameter algorithm parameterized by an integer d and the entry complexity of an input matrix that either outputs a matrix with the smallest dual tree-depth that is row-equivalent to the input matrix or outputs that there is no matrix with dual tree-depth at most d that is row-equivalent to the input matrix.