Parallel Technique for the Metaheuristic Algorithms Using Devoted Local Search and Manipulating the Solutions Space
Recommended publications
-
Nonlinear Optimization Using the Generalized Reduced Gradient Method. Revue Française d'Automatique, Informatique, Recherche Opérationnelle
REVUE FRANÇAISE D'AUTOMATIQUE, INFORMATIQUE, RECHERCHE OPÉRATIONNELLE. RECHERCHE OPÉRATIONNELLE

Leon S. Lasdon, Richard L. Fox, Margery W. Ratner. Nonlinear optimization using the generalized reduced gradient method. Revue française d'automatique, informatique, recherche opérationnelle. Recherche opérationnelle, tome 8, no V-3 (1974), p. 73-103. <http://www.numdam.org/item?id=RO_1974__8_3_73_0>

© AFCET, 1974, all rights reserved. Access to the archives of the journal "Revue française d'automatique, informatique, recherche opérationnelle. Recherche opérationnelle" implies agreement with the general conditions of use (http://www.numdam.org/conditions). Any commercial use or systematic printing constitutes a criminal offence. Any copy or printout of this file must contain this copyright notice. Article digitized as part of the programme "Numérisation de documents anciens mathématiques", http://www.numdam.org/

R.A.I.R.O. (8e année, novembre 1974, V-3, p. 73 à 104)

NONLINEAR OPTIMIZATION USING THE GENERALIZED REDUCED GRADIENT METHOD (*), by Leon S. Lasdon, Richard L. Fox and Margery W. Ratner

Abstract. This paper describes the principles and logic of a system of computer programs for solving nonlinear optimization problems using a Generalized Reduced Gradient algorithm. The work is based on earlier work of Abadie (2). Since this paper was written, many changes have been made in the logic, and significant computational experience has been obtained. We hope to report on this in a later paper.

1. INTRODUCTION

Generalized Reduced Gradient methods are algorithms for solving nonlinear programs of general structure. This paper discusses the basic principles of GRG and constructs a specific GRG algorithm. The logic of a computer program implementing this algorithm is presented by means of flow charts and discussion.

-
Metaheuristics
METAHEURISTICS

Kenneth Sörensen, University of Antwerp, Belgium
Fred Glover, University of Colorado and OptTek Systems, Inc., USA

1 Definition

A metaheuristic is a high-level problem-independent algorithmic framework that provides a set of guidelines or strategies to develop heuristic optimization algorithms (Sörensen and Glover, to appear). Notable examples of metaheuristics include genetic/evolutionary algorithms, tabu search, simulated annealing, and ant colony optimization, although many more exist. A problem-specific implementation of a heuristic optimization algorithm according to the guidelines expressed in a metaheuristic framework is also referred to as a metaheuristic.

The term was coined by Glover (1986) and combines the Greek prefix meta- (metá, beyond in the sense of high-level) with heuristic (from the Greek heuriskein or euriskein, to search).

Metaheuristic algorithms, i.e., optimization methods designed according to the strategies laid out in a metaheuristic framework, are, as the name suggests, always heuristic in nature. This fact distinguishes them from exact methods, which do come with a proof that the optimal solution will be found in a finite (although often prohibitively large) amount of time. Metaheuristics are therefore developed specifically to find a solution that is "good enough" in a computing time that is "small enough". As a result, they are not subject to combinatorial explosion: the phenomenon in which the computing time required to find the optimal solution of NP-hard problems increases as an exponential function of the problem size. Metaheuristics have been demonstrated by the scientific community to be a viable, and often superior, alternative to more traditional (exact) methods of mixed-integer optimization such as branch and bound and dynamic programming.
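To make the definition concrete, here is a minimal Python sketch of one well-known metaheuristic framework, simulated annealing, instantiated on a toy problem. The objective, neighborhood move, and geometric cooling schedule are illustrative assumptions, not anything prescribed by Sörensen and Glover's text.

import math
import random

def simulated_annealing(objective, neighbor, x0, t0=1.0, alpha=0.995, iters=10_000):
    # Generic skeleton: the metaheuristic supplies the acceptance rule and the
    # cooling schedule; the problem supplies the objective and the neighbor move.
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        y = neighbor(x)
        fy = objective(y)
        # Always accept improvements; accept worsening moves with a
        # temperature-dependent probability (the "strategy" of this framework).
        if fy <= fx or random.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= alpha  # geometric cooling schedule (an assumption for this toy)
    return best, fbest

# Toy instance (assumed for illustration): minimize a multimodal 1-D function.
obj = lambda x: x * x + 10 * math.sin(3 * x)
move = lambda x: x + random.uniform(-0.5, 0.5)
print(simulated_annealing(obj, move, x0=5.0))

-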
Combinatorial Structures in Nonlinear Programming
Combinatorial Structures in Nonlinear Programming

Stefan Scholtes, April 2002

Abstract

Non-smoothness and non-convexity in optimization problems often arise because a combinatorial structure is imposed on smooth or convex data. The combinatorial aspect can be explicit, e.g. through the use of "max", "min", or "if" statements in a model, or implicit as in the case of bilevel optimization where the combinatorial structure arises from the possible choices of active constraints in the lower level problem. In analyzing such problems, it is desirable to decouple the combinatorial from the nonlinear aspect and deal with them separately. This paper suggests a problem formulation which explicitly decouples the two aspects. We show that such combinatorial nonlinear programs, despite their inherent non-convexity, allow for a convex first order local optimality condition which is generic and tight. The stationarity condition can be phrased in terms of Lagrange multipliers which allows an extension of the popular sequential quadratic programming (SQP) approach to solve these problems. We show that the favorable local convergence properties of SQP are retained in this setting. The computational effectiveness of the method depends on our ability to solve the subproblems efficiently which, in turn, depends on the representation of the governing combinatorial structure. We illustrate the potential of the approach by applying it to optimization problems with max-min constraints which arise, for example, in robust optimization.

1 Introduction

Nonlinear programming is nowadays regarded as a mature field. A combination of important algorithmic developments and increased computing power over the past decades have advanced the field to a stage where the majority of practical problems can be solved efficiently by commercial software.
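As a toy illustration of the max-induced structure mentioned above (a sketch assumed for illustration, not taken from the paper): minimizing the maximum of two smooth functions is nonsmooth, but an auxiliary variable t turns it into a smooth problem whose combinatorial aspect is the choice of active constraints, which an SQP-type solver such as SciPy's SLSQP can handle. The two quadratics are assumptions.

import numpy as np
from scipy.optimize import minimize

# Nonsmooth toy problem: minimize max(f1(x), f2(x)) over x in R^2.
f1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2
f2 = lambda x: x[0] ** 2 + (x[1] - 2.0) ** 2

# Epigraph reformulation: minimize t subject to t >= f1(x) and t >= f2(x).
# The combinatorial aspect (which function attains the max) becomes the
# choice of active constraints, resolved here by an SQP-type method.
def objective(z):          # z = (x1, x2, t)
    return z[2]

cons = [
    {"type": "ineq", "fun": lambda z: z[2] - f1(z[:2])},   # t - f1(x) >= 0
    {"type": "ineq", "fun": lambda z: z[2] - f2(z[:2])},   # t - f2(x) >= 0
]

z0 = np.array([0.0, 0.0, 10.0])
res = minimize(objective, z0, constraints=cons, method="SLSQP")
print(res.x, res.fun)

-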
Lecture 4 Dynamic Programming
Lecture 4: Dynamic Programming

Last update: Jan 19, 2021. References: Algorithms, Jeff Erickson, Chapter 3; Algorithms, Gopal Pandurangan, Chapter 6.

Dynamic Programming

Backtracking is incredibly powerful for solving all kinds of hard problems, but it can often be very slow, usually exponential. Example: the Fibonacci numbers are defined by the recurrence

F_n = 0 if n = 0; F_n = 1 if n = 1; F_n = F_{n-1} + F_{n-2} otherwise.

A direct translation into a recursive program to compute the n-th Fibonacci number is:

RecFib(n):
  if n = 0 return 0
  if n = 1 return 1
  return RecFib(n-1) + RecFib(n-2)

Fibonacci Number

The recursive program has horrible time complexity. How bad? Let's compute. Denote by T(n) the time complexity of computing RecFib(n). Based on the recursion, we have the recurrence

T(n) = T(n-1) + T(n-2) + 1, with T(0) = T(1) = 1.

Solving this recurrence, we get T(n) = O(φ^n), where φ = (√5 + 1)/2 ≈ 1.618. So the RecFib(n) program runs in exponential time.

RecFib Recursion Tree

Intuitively, why does RecFib() run exponentially slowly? The problem is redundant computation! How about memoizing the intermediate results to avoid recomputation?

Fib: Memoization

To improve the performance of RecFib, we can memoize the intermediate values F_n in some kind of cache and look them up when we need them again.

MemFib(n):
  if n = 0 or n = 1
    return n
  if F[n] is undefined
    F[n] <- MemFib(n-1) + MemFib(n-2)
  return F[n]

How much does this improve upon RecFib()? Assuming that accessing F[n] takes constant time, at most n additions will be performed (we never recompute).
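For reference, a runnable Python rendering of the memoized Fibonacci pseudocode above, together with the equivalent bottom-up table-filling version; this is an illustrative translation, not the lecture's own code.

from functools import lru_cache

@lru_cache(maxsize=None)
def mem_fib(n: int) -> int:
    # Top-down memoization: each F(k) is computed once and cached.
    if n <= 1:
        return n
    return mem_fib(n - 1) + mem_fib(n - 2)

def fib_bottom_up(n: int) -> int:
    # Bottom-up dynamic programming: fill the table in order of subproblem size.
    if n <= 1:
        return n
    f = [0] * (n + 1)
    f[1] = 1
    for k in range(2, n + 1):
        f[k] = f[k - 1] + f[k - 2]
    return f[n]

assert mem_fib(30) == fib_bottom_up(30) == 832040

-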
Adversarial Search
Adversarial Search

In which we examine the problems that arise when we try to plan ahead in a world where other agents are planning against us.

Outline
1. Games
2. Optimal Decisions in Games
3. Alpha-Beta Pruning
4. Imperfect, Real-Time Decisions
5. Games that include an Element of Chance
6. State-of-the-Art Game Programs
7. Summary

Search Strategies for Games
• Differences to general search problems:
  – Imperfect information: the opponent is not deterministic
  – Time: approximate algorithms
• Classification of games:
  – perfect information, deterministic: Checkers, Chess, Go
  – perfect information, random: Backgammon, Monopoly
  – incomplete information, deterministic: ?
  – incomplete information, random: Bridge, Poker, Scrabble
• Early fundamental results:
  – Algorithm for the perfect game: von Neumann (1944)
  – Approximation through evaluation functions: Zuse (1945), Shannon (1950)
• Our terminology: deterministic, fully accessible information

Games as Search Problems
• Justification: games are search problems with an opponent
• Games as a playground for serious research
• How can we determine the best next step/action?
  – Cutting branches ("pruning")
  – Evaluation functions to approximate the utility function
• Imperfection through the actions of the opponent: many possible results
• Games are hard to solve exhaustively:
  – Average branching factor in chess: 35
  – ≈ 50 moves per player ➞ about 10^154 nodes in the search tree
  – But "only" about 10^40 legal positions

Search Problem
• 2-player games: player MAX and player MIN; MAX moves first, then the players take turns
• Search problem:
  – Initial state: board, positions, first player
  – Successor function: lists of (move, state) pairs
  – Goal test
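A self-contained sketch of minimax with alpha-beta pruning on an explicit game tree, added here to illustrate the "cutting branches" idea; the tree encoding (nested lists with numeric leaf utilities) and the example values are assumptions for illustration, not the slides' own code.

import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    # Minimax value of `node`; branches outside the (alpha, beta) window are pruned.
    if not isinstance(node, list):           # leaf: utility value
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:                # cut-off: MIN will avoid this branch
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:                # cut-off: MAX will avoid this branch
                break
        return value

# Classic 3-ply example tree (assumed): the minimax value is 3.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, maximizing=True))      # -> 3

-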
Dynamic Programming Via Static Incrementalization 1 Introduction
Dynamic Programming via Static Incrementalization

Yanhong A. Liu and Scott D. Stoller

Abstract

Dynamic programming is an important algorithm design technique. It is used for solving problems whose solutions involve recursively solving subproblems that share subsubproblems. While a straightforward recursive program solves common subsubproblems repeatedly and often takes exponential time, a dynamic programming algorithm solves every subsubproblem just once, saves the result, reuses it when the subsubproblem is encountered again, and takes polynomial time. This paper describes a systematic method for transforming programs written as straightforward recursions into programs that use dynamic programming. The method extends the original program to cache all possibly computed values, incrementalizes the extended program with respect to an input increment to use and maintain all cached results, prunes out cached results that are not used in the incremental computation, and uses the resulting incremental program to form an optimized new program. Incrementalization statically exploits semantics of both control structures and data structures and maintains as invariants equalities characterizing cached results. The principle underlying incrementalization is general for achieving drastic program speedups. Compared with previous methods that perform memoization or tabulation, the method based on incrementalization is more powerful and systematic. It has been implemented and applied to numerous problems and succeeded on all of them.

1 Introduction

Dynamic programming is an important technique for designing efficient algorithms [2, 44, 13]. It is used for problems whose solutions involve recursively solving subproblems that overlap.
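A hand-written toy illustration of the incrementalization idea in Python (not the paper's automatic transformation system): the cached results needed to answer the next query are maintained as an invariant while the input is incremented from n to n+1, so nothing is recomputed from scratch.

def fib_incremental():
    # Invariant: (prev, curr) = (F(n-1), F(n)) for the current value of n.
    prev, curr = 0, 1            # cached results for n = 0 and n = 1
    n = 1
    def advance():               # the "input increment": n := n + 1
        nonlocal prev, curr, n
        prev, curr = curr, prev + curr   # reuse the cache; one addition per increment
        n += 1
        return n, curr
    return advance

advance = fib_incremental()
for _ in range(8):
    print(advance())             # (2, 1), (3, 2), (4, 3), (5, 5), ...

-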
Nonlinear Integer Programming
Nonlinear Integer Programming

Raymond Hemmecke, Matthias Köppe, Jon Lee and Robert Weismantel

Abstract. Research efforts of the past fifty years have led to a development of linear integer programming as a mature discipline of mathematical optimization. Such a level of maturity has not been reached when one considers nonlinear systems subject to integrality requirements for the variables. This chapter is dedicated to this topic. The primary goal is a study of a simple version of general nonlinear integer problems, where all constraints are still linear. Our focus is on the computational complexity of the problem, which varies significantly with the type of nonlinear objective function in combination with the underlying combinatorial structure. Numerous boundary cases of complexity emerge, which sometimes surprisingly lead even to polynomial time algorithms. We also cover recent successful approaches for more general classes of problems. Though no positive theoretical efficiency results are available, nor are they likely to ever be available, these seem to be the currently most successful and interesting approaches for solving practical problems. It is our belief that the study of algorithms motivated by theoretical considerations and those motivated by our desire to solve practical instances should and do inform one another. So it is with this viewpoint that we present the subject, and it is in this direction that we hope to spark further research.

Raymond Hemmecke, Otto-von-Guericke-Universität Magdeburg, FMA/IMO, Universitätsplatz 2, 39106 Magdeburg, Germany, e-mail: [email protected]
Matthias Köppe, University of California, Davis, Dept. of Mathematics, One Shields Avenue, Davis, CA, 95616, USA, e-mail: [email protected]
Jon Lee, IBM T.J.
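A toy illustration of the problem class studied in the chapter (not of its algorithms): a nonconvex objective minimized over the integer points of a polytope, solved here by plain enumeration, which is viable only for very small instances. The objective and constraints are assumptions chosen for illustration.

from itertools import product

# Toy nonlinear integer program (assumed instance):
#   minimize  f(x, y) = (x - 2.5)**2 * (y - 1.5)**2 - x*y     (nonconvex)
#   subject to  x + 2*y <= 8,  x >= 0,  y >= 0,  x, y integer.
def f(x, y):
    return (x - 2.5) ** 2 * (y - 1.5) ** 2 - x * y

best = None
for x, y in product(range(0, 9), range(0, 5)):    # bounding box implied by the constraints
    if x + 2 * y <= 8:                            # the linear constraint
        val = f(x, y)
        if best is None or val < best[0]:
            best = (val, x, y)

print(best)   # (objective value, x, y) of the best integer point found

-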
Artificial Intelligence Spring 2019 Homework 2: Adversarial Search
Artificial Intelligence, Spring 2019
Homework 2: Adversarial Search

PROGRAMMING

In this assignment, you will create an adversarial search agent to play the 2048-puzzle game. A demo of the game is available here: gabrielecirulli.github.io/2048.

I. 2048 As A Two-Player Game
II. Choosing a Search Algorithm: Expectiminimax
III. Using The Skeleton Code
IV. What You Need To Submit
V. Important Information
VI. Before You Submit

I. 2048 As A Two-Player Game

2048 is played on a 4×4 grid with numbered tiles which can slide up, down, left, or right. This game can be modeled as a two-player game, in which the computer AI generates a 2- or 4-tile placed randomly on the board, and the player then selects a direction to move the tiles. Note that the tiles move until they either (1) collide with another tile, or (2) collide with the edge of the grid. If two tiles of the same number collide in a move, they merge into a single tile valued at the sum of the two originals. The resulting tile cannot merge with another tile again in the same move.

Usually, each role in a two-player game has a similar set of moves to choose from, and similar objectives (e.g. chess). In 2048, however, the player roles are inherently asymmetric, as the computer AI places tiles and the player moves them. Adversarial search can still be applied! Using your previous experience with objects, states, nodes, functions, and implicit or explicit search trees, along with our skeleton code, focus on optimizing your player algorithm to solve 2048 as efficiently and consistently as possible.
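A minimal, self-contained sketch of the expectiminimax recursion the assignment calls for, evaluated on a hand-built toy tree rather than the real 4×4 grid; the node encoding, probabilities, and heuristic values are assumptions, and a real agent would add a depth limit, the actual grid mechanics from the skeleton code, and a board heuristic.

def expectiminimax(node):
    # node is either a number (leaf utility), ("max", [children]) for the
    # player's turn, or ("chance", [(prob, child), ...]) for the computer's
    # random tile placement.
    if isinstance(node, (int, float)):
        return node
    kind, children = node
    if kind == "max":
        return max(expectiminimax(c) for c in children)
    if kind == "chance":
        return sum(p * expectiminimax(c) for p, c in children)
    raise ValueError(kind)

# Tiny assumed example: the player picks between two moves; after each move the
# computer places a 2-tile (prob 0.9) or a 4-tile (prob 0.1), leading to
# positions with the given heuristic scores.
game = ("max", [
    ("chance", [(0.9, 10.0), (0.1, 2.0)]),    # move A: expected value 9.2
    ("chance", [(0.9, 6.0), (0.1, 40.0)]),    # move B: expected value 9.4
])
print(expectiminimax(game))                    # -> 9.4, so the player prefers move B

-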
Automatic Code Generation Using Dynamic Programming Techniques
Automatic Code Generation using Dynamic Programming Techniques

Master's thesis (Masterarbeit) submitted for the academic degree of Diplom-Ingenieur in the Master's programme in Computer Science (Informatik). Submitted by: Igor Böhm, 0155477. Carried out at: Institut für System Software. Supervision: o.Univ.-Prof. Dipl.-Ing. Dr. Dr.h.c. Hanspeter Mössenböck. Linz, October 2007.

Abstract

Building compiler back ends from declarative specifications that map tree structured intermediate representations onto target machine code is the topic of this thesis. Although many tools and approaches have been devised to tackle the problem of automated code generation, there is still room for improvement. In this context we present Hburg, an implementation of a code generator generator that emits compiler back ends from concise tree pattern specifications written in our code generator description language. The language features attribute grammar style specifications and allows for great flexibility with respect to the placement of semantic actions. Our main contribution is to show that these language features can be integrated into automatically generated code generators that perform optimal instruction selection based on tree pattern matching combined with dynamic programming. In order to substantiate claims about the usefulness of our language we provide two complete examples that demonstrate how to specify code generators for Risc and Cisc architectures.

Kurzfassung (German abstract, translated)

This thesis describes Hburg, a tool that automatically generates a code generator for a target machine from a specification of a program's abstract syntax tree and a specification of the desired target machine. Mappings between abstract syntax trees and a target machine are defined by tree patterns. For this purpose we have developed a declarative description language that makes it possible to attach attributes to the tree patterns, so that they can in effect be parameterized.
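A heavily simplified sketch (in Python, not Hburg's specification language or its generated code) of the underlying idea: bottom-up dynamic programming over an expression tree that picks, at each node, the cheapest tree pattern covering it. The node encoding, the two patterns, and their costs are assumptions for illustration.

# Expression tree: (op, children...) tuples; leaves are ("reg",) or ("const", k).
# Each pattern maps a node shape to an instruction and a cost; the DP computes,
# for every subtree, the cheapest cover (pattern cost plus the costs of the
# subtrees the pattern leaves uncovered).

def cost(node):
    op = node[0]
    if op == "reg":
        return 0, []
    if op == "const":
        return 1, ["LOADI"]
    if op == "add":
        left, right = node[1], node[2]
        lcost, lcode = cost(left)
        rcost, rcode = cost(right)
        candidates = [(lcost + rcost + 1, lcode + rcode + ["ADD"])]
        # A larger pattern: add of a value and a constant in one instruction.
        if right[0] == "const":
            candidates.append((lcost + 1, lcode + [f"ADDI {right[1]}"]))
        return min(candidates, key=lambda c: c[0])
    raise ValueError(op)

tree = ("add", ("add", ("reg",), ("const", 4)), ("const", 8))
print(cost(tree))   # the ADDI pattern is chosen twice: total cost 2 instead of 4

-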
Best-First and Depth-First Minimax Search in Practice
Best-First and Depth-First Minimax Search in Practice

Aske Plaat, Erasmus University, [email protected]
Jonathan Schaeffer, University of Alberta, [email protected]
Wim Pijls, Erasmus University, [email protected]
Arie de Bruin, Erasmus University, [email protected]

Erasmus University, Department of Computer Science, Room H4-31, P.O. Box 1738, 3000 DR Rotterdam, The Netherlands
University of Alberta, Department of Computing Science, 615 General Services Building, Edmonton, Alberta, Canada T6G 2H1

Abstract

Most practitioners use a variant of the Alpha-Beta algorithm, a simple depth-first procedure, for searching minimax trees. SSS*, with its best-first search strategy, reportedly offers the potential for more efficient search. However, the complex formulation of the algorithm and its alleged excessive memory requirements preclude its use in practice. For two decades, the search efficiency of "smart" best-first SSS* has cast doubt on the effectiveness of "dumb" depth-first Alpha-Beta. This paper presents a simple framework for calling Alpha-Beta that allows us to create a variety of algorithms, including SSS* and DUAL*. In effect, we formulate a best-first algorithm using depth-first search. Expressed in this framework, SSS* is just a special case of Alpha-Beta, solving all of the perceived drawbacks of the algorithm. In practice, Alpha-Beta variants typically evaluate fewer nodes than SSS*. A new instance of this framework, MTD(f), out-performs SSS* and NegaScout, the Alpha-Beta variant of choice by practitioners.

1 Introduction

Game playing is one of the classic problems of artificial intelligence.
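A self-contained toy sketch of the MTD(f) idea on an explicit game tree with integer leaf values: a sequence of null-window Alpha-Beta calls, backed by a memory table of lower and upper bounds, converges on the minimax value. This follows the general scheme described in the paper but is an illustrative reimplementation under assumed data structures, not the authors' code.

import math

table = {}   # id(node) -> (lower, upper) bounds remembered between re-searches

def ab_memory(node, alpha, beta, maximizing):
    # Fail-soft Alpha-Beta that stores and reuses bounds on interior nodes.
    if not isinstance(node, list):                 # leaf: utility value
        return node
    lo, hi = table.get(id(node), (-math.inf, math.inf))
    if lo >= beta:
        return lo
    if hi <= alpha:
        return hi
    alpha, beta = max(alpha, lo), min(beta, hi)
    if maximizing:
        g, a = -math.inf, alpha
        for child in node:
            g = max(g, ab_memory(child, a, beta, False))
            a = max(a, g)
            if g >= beta:
                break
    else:
        g, b = math.inf, beta
        for child in node:
            g = min(g, ab_memory(child, alpha, b, True))
            b = min(b, g)
            if g <= alpha:
                break
    if g <= alpha:                                 # fail low: g is an upper bound
        table[id(node)] = (lo, g)
    elif g >= beta:                                # fail high: g is a lower bound
        table[id(node)] = (g, hi)
    else:                                          # exact value
        table[id(node)] = (g, g)
    return g

def mtdf(root, first_guess, maximizing=True):
    # Drive Alpha-Beta with zero-width windows until the bounds meet.
    table.clear()
    g, lower, upper = first_guess, -math.inf, math.inf
    while lower < upper:
        beta = max(g, lower + 1)
        g = ab_memory(root, beta - 1, beta, maximizing)
        if g < beta:
            upper = g
        else:
            lower = g
    return g

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]        # assumed toy tree
print(mtdf(tree, first_guess=0))                   # -> 3, same as a full-window search

-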
Lecture #3: Dynamic Programming II
15-451/651: Design & Analysis of Algorithms, September 3, 2019
Lecture #3: Dynamic Programming II (last changed: August 30, 2019)

In this lecture we continue our discussion of dynamic programming, focusing on using it for a variety of path-finding problems in graphs. Topics in this lecture include:
• The Bellman-Ford algorithm for single-source (or single-sink) shortest paths.
• Matrix-product algorithms for all-pairs shortest paths.
• Algorithms for all-pairs shortest paths, including Floyd-Warshall and Johnson.
• Dynamic programming for the Travelling Salesperson Problem (TSP).

1 Introduction

As a reminder of basic terminology: a graph is a set of nodes or vertices, with edges between some of the nodes. We will use V to denote the set of vertices and E to denote the set of edges. If there is an edge between two vertices, we call them neighbors. The degree of a vertex is the number of neighbors it has. Unless otherwise specified, we will not allow self-loops or multi-edges (multiple edges between the same pair of nodes). As is standard with discussing graphs, we will use n = |V| and m = |E|, and we will let V = {1, ..., n}.

The above describes an undirected graph. In a directed graph, each edge now has a direction (and as we said earlier, we will sometimes call the edges in a directed graph arcs). For each node, we can now talk about out-neighbors (and out-degree) and in-neighbors (and in-degree). In a directed graph you may have both an edge from u to v and an edge from v to u.
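An illustrative Python version of the Bellman-Ford dynamic program listed among the topics above (not the lecture's own code); the example edge list is an assumption.

import math

def bellman_ford(n, edges, source):
    # Single-source shortest paths with negative edge weights allowed.
    # edges is a list of directed arcs (u, v, w) on vertices 0..n-1.
    dist = [math.inf] * n
    dist[source] = 0
    # After i rounds, dist[v] is the length of a shortest path from the source
    # to v that uses at most i edges (the DP behind Bellman-Ford).
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One more pass detects negative-weight cycles reachable from the source.
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            raise ValueError("negative-weight cycle reachable from the source")
    return dist

edges = [(0, 1, 4), (0, 2, 2), (2, 1, -1), (1, 3, 5), (2, 3, 8)]
print(bellman_ford(4, edges, source=0))   # -> [0, 1, 2, 6]

-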
Language and Compiler Support for Dynamic Code Generation by Massimiliano A. Poletto
Language and Compiler Support for Dynamic Code Generation

by Massimiliano A. Poletto

S.B., Massachusetts Institute of Technology (1995)
M.Eng., Massachusetts Institute of Technology (1995)

Submitted to the Department of Electrical Engineering and Computer Science in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the MASSACHUSETTS INSTITUTE OF TECHNOLOGY, September 1999.

© Massachusetts Institute of Technology 1999. All rights reserved.

Author: Department of Electrical Engineering and Computer Science, June 23, 1999
Certified by: M. Frans Kaashoek, Associate Professor of Electrical Engineering and Computer Science, Thesis Supervisor
Accepted by: Arthur C. Smith, Chairman, Departmental Committee on Graduate Students

Abstract

Dynamic code generation, also called run-time code generation or dynamic compilation, is the creation of executable code for an application while that application is running. Dynamic compilation can significantly improve the performance of software by giving the compiler access to run-time information that is not available to a traditional static compiler. A well-designed programming interface to dynamic compilation can also simplify the creation of important classes of computer programs. Until recently, however, no system combined efficient dynamic generation of high-performance code with a powerful and portable language interface. This thesis describes a system that meets these requirements, and discusses several applications of dynamic compilation.
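As a loose, hedged illustration of the idea only (a Python toy, not the system the thesis describes): once a value unknown at static compile time becomes available at run time, code specialized to that value can be generated and compiled on the fly.

def make_power(n: int):
    # Generate, at run time, a routine specialized to the now-known exponent n.
    # A static compiler could not unroll this multiplication chain, because n
    # only arrives at run time. (Toy analogy, not the thesis's mechanism.)
    body = " * ".join(["x"] * n) if n > 0 else "1"
    src = f"def power(x):\n    return {body}\n"
    namespace = {}
    exec(compile(src, "<generated>", "exec"), namespace)   # dynamic code generation
    return namespace["power"]

cube = make_power(3)        # generated source: def power(x): return x * x * x
print(cube(5))              # -> 125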