Depth-First Branch-and-Bound versus Local Search: A Case Study


Weixiong Zhang
Information Sciences Institute and Computer Science Department
University of Southern California
4676 Admiralty Way, Marina del Rey, CA 90292
Email: [email protected]

From: AAAI-00 Proceedings. Copyright © 2000, AAAI (www.aaai.org). All rights reserved.

Abstract

Depth-first branch-and-bound (DFBnB) is a complete algorithm that is typically used to find optimal solutions of difficult combinatorial optimization problems. It can also be adapted to an approximation algorithm and run as an anytime algorithm, which are the subjects of this paper. We compare DFBnB against the Kanellakis-Papadimitriou local search algorithm, the best known approximation algorithm, on the asymmetric Traveling Salesman Problem (ATSP), an important NP-hard problem. Our experimental results show that DFBnB significantly outperforms the local search on large ATSPs and various ATSP structures, finding better solutions faster than the local search, and that the quality of approximate solutions from a prematurely terminated DFBnB, called truncated DFBnB, is several times better than that from the local search.

Introduction and Overview

Depth-first branch-and-bound (DFBnB) (Balas & Toth 1985; Papadimitriou & Steiglitz 1982) and local search (Lin & Kernighan 1973; Kanellakis & Papadimitriou 1980; Johnson 1990; Johnson & McGeoch 1997) are the two most widely applied search methods for solving combinatorial optimization problems, such as planning and scheduling. DFBnB is usually the algorithm of choice for finding optimal solutions of large problems, by virtue of its linear-space requirement. Local search, on the other hand, is a method for finding high-quality approximate solutions, and has been shown to be effective and efficient on many combinatorial optimization problems, such as the symmetric Traveling Salesman Problem (TSP) (Lin & Kernighan 1973; Johnson 1990; Johnson & McGeoch 1997).

Besides being an efficient complete algorithm for finding optimal solutions, DFBnB can be used as an approximation algorithm¹, as suggested in (Ibaraki et al. 1983). DFBnB explores a state space in a depth-first order, and finds many suboptimal solutions of increasingly better quality. These solutions are approximations to the optimal solution if DFBnB is terminated prematurely. Furthermore, DFBnB is also an anytime algorithm. An anytime algorithm (Dean & Boddy 1988) can provide a solution at any time during its execution, and is able to improve the quality of the current best solution with more computation. DFBnB finds better suboptimal solutions with more computation and eventually reaches the optimal solution. In contrast to the importance of anytime problem solving and the effort devoted to developing new anytime algorithms (Hoebel & Zilberstein 1997; Horvitz & Zilberstein 1996), DFBnB has not been studied as an approximation or anytime algorithm so far.

¹ We use the term approximation algorithm loosely to refer to an algorithm that is able to find a suboptimal solution. Such an algorithm does not provide a quality guarantee under the definition of ε-approximation algorithm (Papadimitriou 1994).

We study DFBnB as an approximation and anytime algorithm in this paper. We compare it against the Kanellakis-Papadimitriou local search algorithm on the asymmetric Traveling Salesman Problem (ATSP) (Kanellakis & Papadimitriou 1980). This local search algorithm is an adaptation and extension of the well-known Lin-Kernighan local search algorithm (Lin & Kernighan 1973), and the only local search algorithm for the ATSP that we found in the literature. We choose the ATSP for two reasons. First, the ATSP is an important NP-hard problem (Papadimitriou & Steiglitz 1982) and has many practical applications. Many difficult combinatorial optimization problems, such as vehicle routing, workshop scheduling and computer wiring, can be formulated and solved as the ATSP (Lawler et al. 1985). Second, despite its importance, little work has been done on the ATSP, disproportionately little compared with the work on the symmetric TSP (see (Johnson & McGeoch 1997) for an excellent survey and the references cited there). It would therefore be very helpful if information regarding which algorithm should be used for a particular type of ATSP were available to guide algorithm selection in practice.

The paper is structured as follows. We discuss the ATSP, DFBnB, truncated DFBnB and the Kanellakis-Papadimitriou local search algorithm in Section 2. In Section 3, we describe the various ATSP structures used in our experiments. In Section 4, we investigate initial tour-construction heuristics. In Section 5, we compare DFBnB with the local search algorithm. We discuss the features of DFBnB and the weaknesses of the local search algorithm on the ATSP in Section 6. Finally, we conclude and discuss future work in Section 7.

The Problem and the Algorithms

Given n cities {1, 2, ..., n} and a matrix (c_ij) that defines the costs between pairs of cities, the Traveling Salesman Problem (TSP) is to find a minimum-cost tour that visits each city once and returns to the starting city. When the cost matrix is asymmetric, i.e., the cost from city i to city j is not necessarily equal to the cost from j to i, the problem is the asymmetric TSP (ATSP).
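To make the definition concrete, the following minimal sketch (an illustration added here, not part of the paper) represents an ATSP instance as an asymmetric cost matrix and evaluates a tour; the matrix c and the helper tour_cost are invented names for this example.

```python
# Illustration only (not from the paper): an ATSP instance as an asymmetric
# cost matrix c, where c[i][j] is the cost of traveling from city i to city j,
# and a helper that evaluates a cyclic tour.

INF = float("inf")

c = [
    [INF, 2, 9, 10],
    [1, INF, 6, 4],
    [15, 7, INF, 8],
    [6, 3, 12, INF],
]
# Note the asymmetry: c[0][1] = 2 but c[1][0] = 1.

def tour_cost(tour, c):
    """Cost of a tour given as a city ordering, e.g. [0, 1, 3, 2] means
    0 -> 1 -> 3 -> 2 -> 0 (the return edge to the start is included)."""
    n = len(tour)
    return sum(c[tour[k]][tour[(k + 1) % n]] for k in range(n))

print(tour_cost([0, 1, 3, 2], c))  # 2 + 4 + 12 + 15 = 33
```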
DFBnB and truncated DFBnB

Branch-and-bound (BnB) (Balas & Toth 1985; Papadimitriou & Steiglitz 1982) solves an ATSP as a state-space search and uses the assignment problem as a lower-bound cost function. The assignment problem (AP) (Martello & Toth 1987; Papadimitriou & Steiglitz 1982) is to assign to each city i another city j, with c_ij as the cost of the assignment, such that the total cost of all assignments is minimized. The AP is a relaxation of the ATSP, since the assignments need not form a complete tour; therefore, the AP cost is a lower bound on the ATSP tour cost. If the AP solution happens to be a complete tour, it is also the solution to the ATSP.

The BnB search takes the original ATSP as the root of the state space and repeats the following two steps. First, solve the AP of the current problem. If the AP solution is not a complete tour, decompose the problem into subproblems by subtour elimination: select a subtour from the AP solution, and generate subproblems by excluding some edges from the assignments so as to eliminate the subtour. There are many subtour-elimination heuristics (Balas & Toth 1985); we use the Carpaneto-Toth scheme in our experiments (Carpaneto & Toth 1980), which generates no duplicate subproblems. Next, select as the current problem a new subproblem that has been generated but not yet expanded. This process continues until there are no unexpanded subproblems, or until all unexpanded subproblems have costs greater than or equal to the cost of the best complete tour found so far. Note that a subproblem is more constrained than its parent problem; therefore the AP cost of a subproblem must be at least as large as that of its parent. This means that the AP cost function is monotonically nondecreasing with search depth.

DFBnB explores this state space in a depth-first order, maintaining the cost of the best complete tour found so far as an upper bound α, and always selects a recently generated node n to examine next. If the AP solution of n is a complete tour, meaning that n is a leaf node in the search tree, and its cost is less than the current upper bound α, then α is revised to the cost of n. If n's AP solution is not a complete tour and its cost is greater than or equal to α, then n is pruned, because node costs are non-decreasing along a path from the root, so no descendant of n can have a cost smaller than n's cost. Otherwise, n is expanded, generating all its child nodes. To find an optimal goal node quickly, the children of n should be searched in increasing order of their costs. This is called node ordering; we use node ordering in this study.

DFBnB can be used as an anytime algorithm. During the depth-first exploration of a state space, DFBnB may encounter many leaf nodes and continuously improve the best solution at hand. Furthermore, DFBnB can be stopped at any time during its execution. This is an extension of the M-Cut strategy suggested in (Ibaraki et al. 1983), which terminates BnB after a fixed amount of computation. We call DFBnB with an early termination truncated DFBnB. Among all possible stopping points, of particular interest is the point where the first leaf node is reached. When no initial tour is used, this simple, special truncated DFBnB is a greedy search in the search space, which always chooses to explore next the minimum-cost child node of the current state until it reaches a leaf node. When a high-quality initial tour is employed, DFBnB may not necessarily encounter a leaf node before the total allocated computation is exhausted. Without confusion, we call DFBnB that terminates when it reaches a leaf node or consumes all allowed computation truncated DFBnB in the rest of this paper.
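The following compact sketch illustrates the scheme just described: anytime DFBnB with node ordering, pruning against the upper bound α, and an optional node budget that makes it behave like truncated DFBnB. It is an illustration only, not the implementation used in the paper; the AP lower bound and the Carpaneto-Toth subtour-elimination branching are abstracted behind hypothetical callbacks (lower_bound, children, is_leaf, tour_of).

```python
# Illustrative sketch of anytime DFBnB (not the paper's implementation).
# A "node" is any subproblem object; the AP lower bound and the
# Carpaneto-Toth branching are hidden behind hypothetical callbacks:
#   lower_bound(node) -> AP-style lower bound of the subproblem
#   children(node)    -> list of subproblems after subtour elimination
#   is_leaf(node)     -> True if the lower-bound solution is a complete tour
#   tour_of(node)     -> that complete tour

def dfbnb(root, lower_bound, children, is_leaf, tour_of,
          alpha=float("inf"), node_budget=None):
    """Depth-first branch-and-bound; returns (best_cost, best_tour).
    A finite node_budget makes this behave like truncated DFBnB."""
    best_tour = None
    expanded = 0
    stack = [root]
    while stack:
        node = stack.pop()                 # a most recently generated node
        cost = lower_bound(node)
        if cost >= alpha:
            continue                       # prune: bounds never decrease with depth
        if is_leaf(node):
            alpha, best_tour = cost, tour_of(node)   # improved incumbent (anytime)
            continue
        expanded += 1
        if node_budget is not None and expanded > node_budget:
            break                          # truncated DFBnB: stop early
        # Node ordering: push children so that the cheapest is explored first.
        for child in sorted(children(node), key=lower_bound, reverse=True):
            stack.append(child)
    return alpha, best_tour
```

With alpha initialized from an initial tour and no node budget, this runs as the anytime DFBnB discussed above; with a small budget it stops early and simply reports the best incumbent found so far.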
Kanellakis-Papadimitriou local search

Local search is based on a fundamental concept called the neighborhood structure. If two TSP tours differ by λ edges, and one can be changed into the other by swapping the differing edges, then one tour is a λ-change neighbor of the other. A neighborhood structure is established by defining the legitimate changes. Within a neighborhood, a tour is a local optimum if it is the best among its neighbors. Given a neighborhood structure, a local search moves from a tour to a neighboring tour of smaller cost until a local optimum is reached.

The Kanellakis-Papadimitriou local search algorithm for the ATSP (Kanellakis & Papadimitriou 1980) follows the Lin-Kernighan local search algorithm for the symmetric TSP.
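The skeleton below illustrates this generic improve-until-local-optimum loop. It is only a sketch: the pairwise-swap neighborhood stands in for the much more elaborate variable-λ-change moves of the Kanellakis-Papadimitriou algorithm, and the function names are invented for this example.

```python
# Generic local search skeleton (illustration only). The pairwise-swap
# neighborhood below is a deliberately simple stand-in for the variable
# lambda-change moves of the Kanellakis-Papadimitriou algorithm.

def swap_neighbors(tour):
    """All tours obtained by swapping the positions of two cities
    (the starting city at index 0 is kept fixed)."""
    n = len(tour)
    for i in range(1, n - 1):
        for j in range(i + 1, n):
            neighbor = tour[:]
            neighbor[i], neighbor[j] = neighbor[j], neighbor[i]
            yield neighbor

def local_search(tour, cost_fn, neighbors=swap_neighbors):
    """Move to a cheaper neighboring tour until no neighbor improves,
    i.e. until the current tour is a local optimum."""
    best, best_cost = tour, cost_fn(tour)
    improved = True
    while improved:
        improved = False
        for neighbor in neighbors(best):
            c = cost_fn(neighbor)
            if c < best_cost:              # first improvement: move and restart
                best, best_cost = list(neighbor), c
                improved = True
                break
    return best, best_cost
```

With the cost matrix and tour_cost helper from the earlier sketch, local_search([0, 1, 2, 3], lambda t: tour_cost(t, c)) returns a tour that no single swap can improve.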