Laboratoire d'Informatique Fondamentale de Lille
Parallel Metaheuristics: from Design to Implementation
Prof. E-G. Talbi
Parallel Cooperative Optimization Research Group

Motivations
- High-dimensional and complex optimization problems arise in many areas of industrial concern → telecommunications, genomics, transportation, engineering design, …
- Metaheuristics reduce the computational complexity of search, BUT the parallel issue remains important:
  - More and more complex metaheuristics
  - Huge search spaces
  - CPU- or memory-intensive objective functions

Motivations
- Rapid development of technology:
  - Processors: multi-core processors, …
  - Networks (LAN & WAN): Myrinet, Infiniband, optical networks, …
  - Data storage
- Increasing performance/cost ratio

Goals
- Speed up the search → real-time and interactive optimization methods, dynamic optimization, …
- Improve the quality of solutions → cooperation, better even on a single processor
- Improve robustness (across different instances, across the design space)
- Solve large problems → large instances, more accurate mathematical models

Taxonomy of metaheuristics
[Figure: taxonomy tree. Metaheuristics split into single-solution methods (Hill Climbing, Simulated Annealing, Tabu Search) and population-based methods (Evolutionary Algorithms, Scatter Search).]
- Solution-based metaheuristics: Hill Climbing, Simulated Annealing, Tabu Search, VNS, …
- Population-based metaheuristics: Evolutionary Algorithms, Scatter Search, Swarm Optimization, …

Outline
- Parallel Metaheuristics: design issues
- Parallel Metaheuristics: implementation issues (hardware platforms, programming models, performance evaluation)
- Adaptation to multi-objective optimization
- Software frameworks
- Illustration: a network design problem

Parallel Models for Metaheuristics
- A unified view for single-solution based and population-based metaheuristics
- Three major hierarchical models:
  - Algorithm level: independent or cooperating self-contained metaheuristics
  - Iteration level: parallelization of a single step of the metaheuristic (based on the distribution of the handled solutions)
  - Solution level: parallelization of the processing of a single solution

Parallel Models for Metaheuristics
- Algorithm level: independent walks, multi-start model, hybridization/cooperation of metaheuristics
- Iteration level: parallel evaluation of the neighborhood/population
- Solution level: processing of a single solution (objective or data partitioning)
[Figure: the granularity handled at each level: a whole heuristic (algorithm level), a population or neighborhood (iteration level), a single solution (solution level), ranked by scalability.]

Algorithm-Level Parallel Model
- Independent of the problem
- May alter the behavior of the metaheuristic
- Cooperation between self-contained metaheuristics
- Design questions:
  - When? Migration decision: blind (periodic or probabilistic) vs "intelligent" (state-dependent)
  - Where? Exchange topology: complete graph, ring, torus, random, …
  - Which? Exchanged information: elite solutions, search memory, …
  - How? Integration of the received information
[Figure: cooperating metaheuristics, e.g. Simulated Annealing, Genetic Programming, Evolution Strategy, Tabu Search, Ant Colonies, Scatter Search, …]

Ex: Evolutionary Algorithms (Island Model)
- The population is distributed over a set of islands in which semi-isolated EAs are executed.
- Sparse individual exchanges are performed among these islands, with the goal of introducing more diversity into the target populations.
- Improves robustness and solution quality.

Ex: The Multi-Start Model in Local Search
- An independent set of local searches.
- Improves the robustness and quality of solutions.
- The searches may start from the same or different initial solutions or populations.
- They may use different parameters: encodings, operators, memory sizes (tabu list, …), etc.

Flat Classification of the Algorithm-Level Parallel Model
- Homogeneous vs Heterogeneous
- Partial vs Global
- Specialist vs General
E-G. Talbi, "A taxonomy of hybrid metaheuristics", Journal of Heuristics, Vol. 8, No. 2, 2002.

Homogeneous / Heterogeneous
- Homogeneous hybrids: the same metaheuristic is used on every node.
- Heterogeneous hybrids: different metaheuristics are used.
- Example: an Evolutionary Algorithm, Simulated Annealing, and Tabu Search cooperate and co-evolve some solutions through a communication medium.

Global / Partial
- Partial: the problem is decomposed into sub-problems, and each algorithm is dedicated to solving one sub-problem.
- Combinatorial / continuous: decomposition of the problem, of the decision space, …
- Problem dependent: vehicle routing, scheduling, …
- Synchronization: build a global viable solution from the sub-solutions.
[Figure: a problem split into sub-problems, each handled by its own metaheuristic (EAs, Tabu Search, Simulated Annealing).]

Specialist / General
- Specialist: algorithms that solve different problems (ex: co-evolutionary approaches).
[Figure (COSEARCH): a Diversifying Agent (producing solutions from unexplored regions), an Intensifying Agent (producing promising solutions), and a Local Search Agent cooperate; each refers to an Adaptive Memory recording explored regions, promising regions, and good solutions, from which initial solutions are drawn.]
S. Cahon, N. Melab and E-G. Talbi, "COSEARCH: A parallel cooperative metaheuristic", Journal of Mathematical Modelling and Algorithms (JMMA), Vol. 5(2), 2006.
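The island model above can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the OneMax objective, the (1+1)-style elitist step, and the concrete migration policy are assumptions chosen to exercise the four design questions (when: periodic; where: ring; which: elite; how: replace the worst).

```python
import random
from typing import List

def one_max(bits: List[int]) -> int:
    # Toy objective (assumption, not from the slides): number of 1-bits.
    return sum(bits)

def mutate(bits: List[int], rng: random.Random) -> List[int]:
    # Flip one random bit.
    child = bits[:]
    child[rng.randrange(len(child))] ^= 1
    return child

def island_model(n_islands=4, pop_size=10, n_bits=32,
                 epochs=20, steps_per_epoch=25, seed=0) -> List[int]:
    """Algorithm-level parallel model: semi-isolated EAs on a ring.
    When?  blind/periodic  -> migrate once per epoch
    Where? ring topology   -> island i sends to island (i+1) % n_islands
    Which? elite solution  -> the island's current best individual
    How?   integration     -> replace the worst individual of the target
    (Sequential simulation; each island would run on its own processor.)"""
    rng = random.Random(seed)
    islands = [[[rng.randint(0, 1) for _ in range(n_bits)]
                for _ in range(pop_size)] for _ in range(n_islands)]
    for _ in range(epochs):
        # Independent evolution inside each island.
        for pop in islands:
            for _ in range(steps_per_epoch):
                child = mutate(max(pop, key=one_max), rng)
                worst = min(range(pop_size), key=lambda j: one_max(pop[j]))
                if one_max(child) >= one_max(pop[worst]):
                    pop[worst] = child
        # Periodic migration of elites along the ring.
        elites = [max(pop, key=one_max)[:] for pop in islands]
        for i, elite in enumerate(elites):
            target = islands[(i + 1) % n_islands]
            worst = min(range(pop_size), key=lambda j: one_max(target[j]))
            target[worst] = elite
    return max((ind for pop in islands for ind in pop), key=one_max)

best = island_model()
```

Swapping the ring for a complete graph, or the periodic trigger for a state-dependent one, only touches the migration block, which is why these four questions structure the design space.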
Iteration-Level Parallel Model
- Independent of the problem
- Suited to complex objective functions (non-linear, simulations, …)
- Does not alter the behavior of the metaheuristic → speeds up the search
- The metaheuristic sends solution(s) to the workers, which return the full fitness values.
- Design questions:
  - Single-solution based algorithms: decomposition of the neighborhood
  - Population-based algorithms: decomposition of the population

Ex: Single-Solution Based Metaheuristic (Local Search)
- Evaluation of the neighborhood is the computationally intensive step.
- The neighborhood is partitioned and its parts are evaluated in parallel.
- Asynchronous for simulated annealing (changes its behavior); synchronous for deterministic algorithms.

Ex: Population-Based Metaheuristic
- Decomposition of the population: individuals (EAs), ants (AS), particles (PSO), etc.
- Workers handle couples of individuals (recombination, mutation, evaluation) and return evaluated offspring; the EA performs selection and replacement.
- Shared information: pheromone matrix, etc.
- Asynchronous for steady-state GAs (changes their behavior); synchronous for generational GAs.
E-G. Talbi, O. Roux, C. Fonlupt, D. Robillard, "Parallel ant colonies for the quadratic assignment problem", Future Generation Computer Systems, 17(4), 2001.

Solution-Level Parallel Model
- Dependent on the problem
- Does not alter the behavior of the metaheuristic → speeds up the search (CPU- or I/O-intensive evaluation)
- Partial fitnesses are computed on several CPUs and aggregated into the full fitness (synchronous).
- Design questions: data vs task decomposition
  - Data: database, geographical area, structure, …
  - Task: sub-functions, solvers, …

Outline
- Parallel Metaheuristics: design issues
- Parallel Metaheuristics: implementation issues (hardware platforms, programming models, performance evaluation)
- Adaptation to multi-objective optimization
- Software frameworks
- Illustration: a network design problem

Parallel Metaheuristics: Implementation Issues
- Design of parallel metaheuristics
- Programming paradigms / parallel programming environments
- Execution support
- Parallel architecture (hardware)
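A minimal sketch of the iteration-level model for a generational (synchronous) algorithm. The quadratic objective is a stand-in assumption; threads stand in for the processors of a real parallel machine, where the workers would be processes or MPI ranks.

```python
from concurrent.futures import ThreadPoolExecutor

def fitness(individual):
    # Stand-in for an expensive objective (simulation, non-linear model, ...).
    return sum(x * x for x in individual)

def evaluate_population(population, n_workers=4):
    # Iteration-level model: only the evaluation step is decomposed, so the
    # metaheuristic's behavior is unchanged. Results come back in order and
    # are all gathered before selection/replacement (synchronous, as for a
    # generational GA).
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(fitness, population))

population = [[i, i + 1, i + 2] for i in range(8)]
scores = evaluate_population(population)
```

An asynchronous steady-state variant would instead consume each result as it arrives, which (as the slide notes) changes the algorithm's behavior.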
- Main criteria: memory sharing, homogeneity, dedicated vs non-dedicated, scalability, volatility

Shared-Memory Machines
- Easy to program: conventional OS and programming paradigms
- Poor scalability: increasing the number of processors leads to memory contention
- Interconnection network: bus, crossbar, multistage crossbar
- Ex.: dual-core processors (Intel, AMD), Origin (Silicon Graphics), …

Distributed-Memory Architectures
- Good scalability: several hundred nodes with a high-speed interconnection network/switch
- Harder to program; communication costs
- Interconnection schemes: hypercube, (2D or 3D) torus, fat-tree, multistage crossbars
- Ex.: clusters

Clusters & NOWs
- Clusters: collections of PCs interconnected through a high-speed network; low cost, standard components
- NOWs (Networks Of Workstations): take advantage of unused computing power

ccNUMA Architectures
- Mix SMP and DSM: a small number of processors (up to 16) clustered in SMP nodes (fast connection, a crossbar for instance)
- The SMP nodes are connected through a less costly network with poorer performance

Grid Computing
- "Coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations"
- High-Performance Computing grids vs High-Throughput Computing grids
- Offer a virtual supercomputer: billions of idle PCs, stealing unused CPU cycles (a mean of 47%)
- Inexpensive and potentially very powerful, but more difficult to program than traditional parallel computers
N. Melab, S. Cahon, E-G. Talbi, "Grid computing for parallel bioinspired algorithms", Journal of Parallel and Distributed Computing (JPDC), 66(8), 2006.
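Volatility is what makes non-dedicated platforms (NOWs, HT grids) hard to exploit: worker nodes disappear mid-run. Because a fitness evaluation has no side effects, a master can simply resubmit lost tasks. The sketch below is a hypothetical simulation of that idea, not a real grid middleware API; the "flaky" worker merely imitates node failure.

```python
from concurrent.futures import ThreadPoolExecutor

def run_with_resubmission(tasks, worker, n_workers=4, max_rounds=10):
    # Master-worker dispatch tolerating volatile nodes: any task whose
    # worker "fails" stays pending and is resubmitted in the next round.
    # Safe because evaluating a solution is idempotent.
    results, pending = {}, set(tasks)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        for _ in range(max_rounds):
            if not pending:
                break
            futures = {pool.submit(worker, t): t for t in pending}
            for fut, t in futures.items():
                try:
                    results[t] = fut.result()
                    pending.discard(t)
                except RuntimeError:
                    pass  # node lost: task remains pending for resubmission
    return results

# Simulated volatility: even-numbered tasks fail on their first attempt only.
attempts = {}
def flaky_square(x):
    attempts[x] = attempts.get(x, 0) + 1
    if x % 2 == 0 and attempts[x] == 1:
        raise RuntimeError("node lost")
    return x * x

out = run_with_resubmission(range(10), flaky_square)
```

On a dedicated SMP or HP grid this machinery is unnecessary, which is exactly the trade-off the volatility/fault-tolerance column in the table below captures.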
GRID Platforms
- HPC grid, GRID'5000: 9 sites distributed in France (Lille, Orsay, Nancy, Rennes, Lyon, Grenoble, Bordeaux, Sophia, Toulouse) inter-connected by Renater; 5000 processors, between 500 and 1000 CPUs on each site
- HTC grid, PlanetLab: 711 nodes on 338 sites over 25 countries
R. Bolze, …, E-G. Talbi, …, "GRID'5000: A large scale and highly reconfigurable Grid", International Journal of High Performance Computing Applications (IJHPCA), Vol. 20(4), 2006.

Main Criteria for an Efficient Implementation

Architecture | Memory       | Homogeneity   | Dedicated     | Network | Volatility & Fault Tolerance
SMP          | Shared       | Homogeneous   | Dedicated     | Local   | No
COW          | Distributed  | Homogeneous   | Dedicated     | Local   | No
NOW          | Distributed  | Heterogeneous | Non-dedicated | Local   | Yes
HP Grid      | Distributed  | Heterogeneous | Dedicated     | Large   | No
HT Grid      | Distributed  | Heterogeneous | Non-dedicated | Large   | Yes

Parallel Programming Environments
Shared-memory systems:

System       | Category             | Language binding
PThreads     | Operating system     | C
Java threads | Programming language | Java
OpenMP       | Compiler directives  | Fortran, C, C++
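As a stand-in for the shared-memory row of the table (PThreads-style programming, shown here in Python rather than C), worker threads can scan disjoint slices of a neighborhood and update one shared incumbent under a mutex. The neighborhood and objective are made up for illustration; this is the "conventional programming paradigm" that makes shared-memory machines easy to program.

```python
import threading

def parallel_best(neighbors, fitness, n_workers=4):
    # Shared-memory sketch: threads share the `best` record directly;
    # a mutex protects the read-compare-update of the incumbent.
    best = {"solution": None, "fitness": float("inf")}
    lock = threading.Lock()

    def scan(chunk):
        for s in chunk:
            f = fitness(s)
            with lock:  # critical section on the shared incumbent
                if f < best["fitness"]:
                    best["solution"], best["fitness"] = s, f

    chunks = [neighbors[i::n_workers] for i in range(n_workers)]
    threads = [threading.Thread(target=scan, args=(c,)) for c in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return best["solution"], best["fitness"]

# Minimize a made-up objective over a small neighborhood of integers.
sol, fit = parallel_best(list(range(-8, 9)), lambda x: (x - 3) ** 2)
```

On a distributed-memory platform the same scan would instead exchange messages (each node keeping a local best, merged at the end), which is where the "harder to program, communication cost" entries come from.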