Embarrassingly Parallel Search in Constraint Programming
Arnaud Malapert, Jean-Charles Régin, Mohamed Rezgui
16 pages, PDF, 1,020 KB
Recommended publications
- Deterministic Execution of Multithreaded Applications
DETERMINISTIC EXECUTION OF MULTITHREADED APPLICATIONS FOR RELIABILITY OF MULTICORE SYSTEMS. Dissertation for the degree of doctor at the Technische Universiteit Delft, by authority of the Rector Magnificus, Prof. ir. K. C. A. M. Luyben, chair of the Board for Doctorates, to be defended in public on Friday 19 June 2015 at 10:00 by Hamid MUSHTAQ, Master of Science in Embedded Systems, born in Karachi, Pakistan.
This dissertation has been approved by the promotor, Prof. dr. K. L. M. Bertels, and the copromotor, Dr. Z. Al-Ars.
Composition of the doctoral committee: Rector Magnificus (chairman); Prof. dr. K. L. M. Bertels, Technische Universiteit Delft (promotor); Dr. Z. Al-Ars, Technische Universiteit Delft (copromotor). Independent members: Prof. dr. H. Sips, Technische Universiteit Delft; Prof. dr. N. H. G. Baken, Technische Universiteit Delft; Prof. dr. T. Basten, Technische Universiteit Eindhoven, Netherlands; Prof. dr. L. J. M. Rothkrantz, Netherlands Defence Academy; Prof. dr. D. N. Pnevmatikatos, Technical University of Crete, Greece.
Keywords: Multicore, Fault Tolerance, Reliability, Deterministic Execution, WCET.
The work in this thesis was supported by Artemis through the SMECY project (grant 100230). Cover image: an artist's impression of a view of Saturn from its moon Titan, taken from http://www.istockphoto.com and used with permission. ISBN 978-94-6186-487-1. Copyright © 2015 by H. Mushtaq. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means without the prior written permission of the copyright owner.
- Constraint Satisfaction with Preferences (Ph.D. Thesis)
MASARYK UNIVERSITY BRNO, FACULTY OF INFORMATICS. Constraint Satisfaction with Preferences, Ph.D. Thesis, Brno, January 2001, Hana Rudová.
Acknowledgements: I would like to thank my supervisor, Luděk Matyska, for his continuous support, guidance, and encouragement. I am very grateful for his help and advice, which have allowed me to develop both as a person and as the avid student I am today. I want to thank my husband and my family for their support, patience, and love during my Ph.D. study, and especially during the writing of this thesis. This research was supported by the Universities Development Fund of the Czech Republic under contracts # 0748/1998 and # 0407/1999.
Declaration: I declare that this thesis was composed by myself, and all presented results are my own, unless otherwise stated. Hana Rudová
Contents:
1 Introduction
1.1 Thesis Outline
2 Constraint Satisfaction
2.1 Constraint Satisfaction Problem
2.2 Optimization Problem
2.3 Solution Methods
2.4 Constraint Programming
2.4.1 Global Constraints
3 Frameworks
3.1 Weighted Constraint Satisfaction
3.2 Probabilistic Constraint Satisfaction
3.2.1 Problem Definition
3.2.2 Problems' Lattice
3.2.3 Solution
- Introduction to High Performance Computing
Introduction to High Performance Computing. Gregory G. Howes, Department of Physics and Astronomy, University of Iowa. Iowa High Performance Computing Summer School, University of Iowa, Iowa City, Iowa, 20-22 May 2013.
Thanks to Ben Rogers, Glenn Johnson, Mary Grabe, Amir Bozorgzadeh, and Mike Jenn (Information Technology Services), and Preston Smith (Rosen Center for Advanced Computing, Purdue University), as well as the National Science Foundation and the Great Lakes Consortium for Petascale Computing. This presentation borrows heavily from information freely available on the web by Ian Foster and Blaise Barney (see references).
Outline: Introduction • Thinking in Parallel • Parallel Computer Architectures • Parallel Programming Models • References
Introduction. Disclaimer: High Performance Computing (HPC) is valuable to a variety of applications over a very wide range of fields. Many of my examples will come from the world of physics, but I will try to present them in a general sense.
Why use parallel computing? Single-processor speeds are reaching their ultimate limits, and multi-core processors and multiple processors are the most promising paths to performance improvements. Definition of a parallel computer: a set of independent processors that can work cooperatively to solve a problem.
The march towards petascale computing: computing performance is defined in terms of FLoating-point OPerations per Second (FLOPS), where 1 GigaFLOP (GF) = 10^9 FLOPS, 1 TeraFLOP (TF) = 10^12 FLOPS, and 1 PetaFLOP (PF) = 10^15 FLOPS. Petascale computing also refers to extremely large data sets: 1 PetaByte (PB) = 10^15 bytes. Performance improves by a factor of roughly 10 every 4 years.
Thinking in parallel. Definition. Concurrency: the property of a parallel algorithm that a number of operations can be performed by separate processors at the same time.
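That definition is easy to make concrete. A minimal sketch, not from the slides (the squared-sum task and the interleaved work split are illustrative assumptions), of independent processors cooperating on one problem using Python's standard multiprocessing module:

    import multiprocessing as mp

    def partial_sum(chunk):
        # Each worker computes its share of the problem independently.
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        n = mp.cpu_count()                       # the independent processors
        chunks = [data[i::n] for i in range(n)]  # divide the problem
        with mp.Pool(n) as pool:
            total = sum(pool.map(partial_sum, chunks))  # cooperate on one answer
        print(total)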
- CUDA Toolkit 4.2 CURAND Guide
CUDA Toolkit 4.2 CURAND Guide, PG-05328-041_v01, March 2012. Published by NVIDIA Corporation, 2701 San Tomas Expressway, Santa Clara, CA 95050.
Notice: ALL NVIDIA DESIGN SPECIFICATIONS, REFERENCE BOARDS, FILES, DRAWINGS, DIAGNOSTICS, LISTS, AND OTHER DOCUMENTS (TOGETHER AND SEPARATELY, "MATERIALS") ARE BEING PROVIDED "AS IS". NVIDIA MAKES NO WARRANTIES, EXPRESSED, IMPLIED, STATUTORY, OR OTHERWISE WITH RESPECT TO THE MATERIALS, AND EXPRESSLY DISCLAIMS ALL IMPLIED WARRANTIES OF NONINFRINGEMENT, MERCHANTABILITY, AND FITNESS FOR A PARTICULAR PURPOSE.
Information furnished is believed to be accurate and reliable. However, NVIDIA Corporation assumes no responsibility for the consequences of use of such information or for any infringement of patents or other rights of third parties that may result from its use. No license is granted by implication or otherwise under any patent or patent rights of NVIDIA Corporation. Specifications mentioned in this publication are subject to change without notice. This publication supersedes and replaces all information previously supplied. NVIDIA Corporation products are not authorized for use as critical components in life support devices or systems without express written approval of NVIDIA Corporation.
Trademarks: NVIDIA, CUDA, and the NVIDIA logo are trademarks or registered trademarks of NVIDIA Corporation in the United States and other countries. Other company and product names may be trademarks of the respective companies with which they are associated.
Copyright © 2005-2012 by NVIDIA Corporation. All rights reserved. Portions of the MTGP32 (Mersenne Twister for GPU) library routines are subject to the following copyright: Copyright © 2009, 2010 Mutsuo Saito, Makoto Matsumoto and Hiroshima University.
- Unit 4: Processes and Threads in Distributed Systems
Thread. A program has one or more loci of execution, and each is called a thread of execution. In traditional operating systems, each process has an address space and a single thread of execution. A thread is the smallest unit of processing that can be scheduled by an operating system: a single sequential stream of execution within a process. Because threads have some of the properties of processes, they are sometimes called lightweight processes. Within a process, threads allow multiple streams of execution.
Thread structure. A process is used to group resources together; threads are the entities scheduled for execution on the CPU. A thread has a program counter that keeps track of which instruction to execute next, registers that hold its current working variables, and a stack that contains the execution history, with one frame for each procedure called but not yet returned from. Although a thread must execute in some process, the thread and its process are different concepts and can be treated separately. What threads add to the process model is to allow multiple executions to take place in the same process environment, to a large degree independent of one another. Having multiple threads running in parallel in one process is similar to having multiple processes running in parallel in one computer.
Figure: (a) three processes, each with one thread; (b) one process with three threads. In the former case, the threads share an address space, open files, and other resources; in the latter case, the processes share physical memory, disks, printers, and other resources.
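A short sketch of case (b), three threads sharing one process's address space (illustrative only; the shared counter and the lock are this example's assumptions, written with Python's standard threading module):

    import threading

    counter = 0                  # lives in the process's single address space
    lock = threading.Lock()

    def work():
        global counter
        for _ in range(100_000):
            with lock:           # shared data needs coordination
                counter += 1     # every thread sees the same variable

    threads = [threading.Thread(target=work) for _ in range(3)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)               # 300000: three threads, one shared address space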
- A Short History of Computational Complexity
The Computational Complexity Column, by Lance Fortnow, NEC Laboratories America, 4 Independence Way, Princeton, NJ 08540, USA. [email protected], http://www.neci.nj.nec.com/homepages/fortnow/beatcs
Every third year the Conference on Computational Complexity is held in Europe, and this summer the University of Aarhus (Denmark) will host the meeting, July 7-10. More details are at the conference web page, http://www.computationalcomplexity.org. This month we present a historical view of computational complexity written by Steve Homer and myself. This is a preliminary version of a chapter to be included in an upcoming North-Holland Handbook of the History of Mathematical Logic edited by Dirk van Dalen, John Dawson, and Aki Kanamori.
A Short History of Computational Complexity. Lance Fortnow (NEC Research Institute, 4 Independence Way, Princeton, NJ 08540) and Steve Homer (Computer Science Department, Boston University, 111 Cummington Street, Boston, MA 02215).
1 Introduction. It all started with a machine. In 1936, Turing developed his theoretical computational model, basing it on how he perceived mathematicians think. As digital computers were developed in the 40's and 50's, the Turing machine proved itself as the right theoretical model for computation. Quickly, though, we discovered that the basic Turing machine model fails to account for the amount of time or memory needed by a computer, a critical issue today but even more so in those early days of computing. The key idea, to measure time and space as a function of the length of the input, came in the early 1960's from Hartmanis and Stearns.
- Deterministic Parallel Fixpoint Computation
Deterministic Parallel Fixpoint Computation. Sung Kook Kim (University of California, Davis, U.S.A.), Arnaud J. Venet (Facebook, Inc., U.S.A.), and Aditya V. Thakur (University of California, Davis, U.S.A.).
Abstract interpretation is a general framework for expressing static program analyses. It reduces the problem of extracting properties of a program to computing an approximation of the least fixpoint of a system of equations. The de facto approach for computing this approximation uses a sequential algorithm based on weak topological order (WTO). This paper presents a deterministic parallel algorithm for fixpoint computation by introducing the notion of weak partial order (WPO). We present an algorithm for constructing a WPO in almost-linear time. Finally, we describe Pikos, our deterministic parallel abstract interpreter, which extends the sequential abstract interpreter IKOS. We evaluate the performance and scalability of Pikos on a suite of 1017 C programs. When using 4 cores, Pikos achieves an average speedup of 2.06x over IKOS, with a maximum speedup of 3.63x. When using 16 cores, Pikos achieves a maximum speedup of 10.97x.
CCS Concepts: • Software and its engineering → Automated static analysis; • Theory of computation → Program analysis. Additional Key Words and Phrases: Abstract interpretation, Program analysis, Concurrency.
ACM Reference Format: Sung Kook Kim, Arnaud J. Venet, and Aditya V. Thakur. 2020. Deterministic Parallel Fixpoint Computation. Proc. ACM Program. Lang. 4, POPL, Article 14 (January 2020), 33 pages. https://doi.org/10.1145/3371082
1 INTRODUCTION. Program analysis is a widely adopted approach for automatically extracting properties of the dynamic behavior of programs [Balakrishnan et al.
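The core framing here, an analysis as the least fixpoint of a system of equations, can be shown in miniature. A sketch of plain Kleene iteration in Python (an editor's illustration, not the paper's WTO- or WPO-based algorithm):

    def lfp(f, bottom=frozenset()):
        # Kleene iteration: start at the bottom element and reapply f until
        # the result stops changing; for a monotone f over a finite lattice
        # this terminates at the least fixpoint.
        x = bottom
        while (y := f(x)) != x:
            x = y
        return x

    # Example equation X = {1} ∪ succ(X): reachability in a tiny graph.
    edges = {1: {2}, 2: {3}, 3: {2}}
    f = lambda X: frozenset({1}) | frozenset(s for v in X for s in edges[v])
    print(sorted(lfp(f)))  # [1, 2, 3]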
- Backtracking Search (CSPs)
CSC384: Intro to Artificial Intelligence. Backtracking Search (CSPs). Chapter 5; Section 5.3 is about local search, which is a very useful idea, but we won't cover it in class. Hojjat Ghaderi, University of Toronto.
Constraint satisfaction problems. The search algorithms we discussed so far had no knowledge of the state representation (a black box), so for each problem we had to design a new state representation (and embed in it the subroutines we pass to the search algorithms). Instead, we can have a general state representation that works well for many different problems, and we can then build specialized search algorithms that operate efficiently on this general representation. We call the class of problems that can be represented this way CSPs: Constraint Satisfaction Problems. Techniques for solving CSPs find more practical applications in industry than most other areas of AI.
The idea: represent states as a vector of feature values. We have k features (or variables), and each feature takes a value from a domain of possible values, e.g. height = {short, average, tall} and weight = {light, average, heavy}. In CSPs, the problem is to search for a set of values for the features (variables) so that the values satisfy some conditions (constraints); i.e., a goal state is specified as conditions on the vector of feature values.
Sudoku: 81 variables, each representing the value of a cell. Values: a fixed value for those cells that are already filled in, and the values {1-9} for those cells that are empty.
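The feature-vector view maps directly onto generic backtracking search. A minimal sketch (an editor's illustration, not the lecture's code; the height/weight domains come from the slide, while the single constraint is invented for the example):

    # Features (variables) with finite domains; constraints are predicates
    # over (possibly partial) assignments.
    variables = {"height": ["short", "average", "tall"],
                 "weight": ["light", "average", "heavy"]}
    constraints = [lambda a: not (a.get("height") == "tall" and
                                  a.get("weight") == "light")]

    def consistent(a):
        return all(c(a) for c in constraints)

    def backtrack(assignment, unassigned):
        if not unassigned:                 # every feature has a value: goal
            return assignment
        var, rest = unassigned[0], unassigned[1:]
        for value in variables[var]:
            assignment[var] = value
            if consistent(assignment):     # prune violated branches early
                result = backtrack(assignment, rest)
                if result is not None:
                    return result
            del assignment[var]
        return None

    print(backtrack({}, list(variables)))  # e.g. {'height': 'short', 'weight': 'light'}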
- Lecture 23: Parallelism
Lecture 23: Parallelism.
Administration: take QUIZ 17 over P&H 7.1-5 before 11:59pm today; the cache simulator project is due April 29, 2010.
Last time: on-chip communication; how I/O works.
Today: where do architectures exploit parallelism? What are the implications for programming models? What are the implications for communication? And for caching?
Parallelism: what type of parallelism do applications have, and how can computer architectures exploit that parallelism?
Granularity of parallelism: fine-grain instruction-level parallelism, fine-grain data parallelism, coarse-grain (data center) parallelism, and multicore parallelism.
Fine-grain instruction-level parallelism: pipelining; multi-issue (dynamic scheduling and issue); and VLIW (static issue). VLIW means Very Long Instruction Word: each VLIW instruction includes up to N RISC-like operations, the compiler groups up to N independent operations, and the architecture issues fixed-size VLIW instructions.
VLIW example: let N = 4. Theoretically that is 4 ops/cycle, so 8 VLIW instructions carry 32 operations. In practice the compiler cannot fill all slots, and memory stalls (e.g., load stalls) reduce the achieved rate.
Multithreading: performing multiple threads together by replicating registers, the PC, etc., with fast switching between threads. Fine-grain multithreading switches threads after each cycle and interleaves instruction execution, so if one thread stalls, others are executed. Medium-grain
- Prolog Lecture 6
Prolog lecture 6: solving Sudoku puzzles, constraint logic programming, and natural language processing.
Playing Sudoku; make the problem easier. We can model this problem in Prolog using list permutations: each row must be a permutation of [1,2,3,4], each column must be a permutation of [1,2,3,4], and each 2x2 box must be a permutation of [1,2,3,4]. Represent the board as a list of lists:

    [[A,B,C,D],
     [E,F,G,H],
     [I,J,K,L],
     [M,N,O,P]]

The sudoku predicate is built from simultaneous perm constraints:

    sudoku([[X11,X12,X13,X14],
            [X21,X22,X23,X24],
            [X31,X32,X33,X34],
            [X41,X42,X43,X44]]) :-
        % rows
        perm([X11,X12,X13,X14],[1,2,3,4]),
        perm([X21,X22,X23,X24],[1,2,3,4]),
        perm([X31,X32,X33,X34],[1,2,3,4]),
        perm([X41,X42,X43,X44],[1,2,3,4]),
        % cols
        perm([X11,X21,X31,X41],[1,2,3,4]),
        perm([X12,X22,X32,X42],[1,2,3,4]),
        perm([X13,X23,X33,X43],[1,2,3,4]),
        perm([X14,X24,X34,X44],[1,2,3,4]),
        % boxes
        perm([X11,X12,X21,X22],[1,2,3,4]),
        perm([X13,X14,X23,X24],[1,2,3,4]),
        perm([X31,X32,X41,X42],[1,2,3,4]),
        perm([X33,X34,X43,X44],[1,2,3,4]).

This scales up in the obvious way to 3x3 boxes, but brute force is impractically slow: there are very many valid grids (6670903752021072936960, roughly 6.671 x 10^21), and this approach does not encode the interrelationships between the constraints. For more information on Sudoku enumeration, see http://www.afjarvis.staff.shef.ac.uk/sudoku/.
Prolog programs can be viewed as constraint satisfaction problems. Prolog is limited to the single equality constraint: two terms must unify. We can generalise this to include other types of constraint. Doing so leads
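For contrast, the same permutation model in Python (an editor's sketch, not from the lecture; it enumerates row permutations and checks the row, column, and box constraints listed above, and it is exactly as exhaustive as the slides warn):

    from itertools import permutations, product

    def units(grid):
        # rows, columns, and the four 2x2 boxes of a 4x4 grid
        yield from grid
        yield from zip(*grid)
        for r, c in product((0, 2), repeat=2):
            yield (grid[r][c], grid[r][c + 1], grid[r + 1][c], grid[r + 1][c + 1])

    def valid(grid):
        return all(sorted(u) == [1, 2, 3, 4] for u in units(grid))

    rows = list(permutations([1, 2, 3, 4]))  # every legal row, like perm/2
    solution = next(g for g in product(rows, repeat=4) if valid(g))
    for row in solution:
        print(row)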
- Embarrassingly Parallel
Algorithms, Part I: Embarrassingly Parallel. HPC Spring 2017, Prof. Robert van Engelen.
Overview: ideal parallelism; the master-worker paradigm; processor farms; examples (geometrical transformations of images, the Mandelbrot set, Monte Carlo methods); load balancing of independent tasks; further reading.
Ideal parallelism: an ideal parallel computation can be immediately divided into completely independent parts ("embarrassingly parallel", "naturally parallel"); no special techniques or algorithms are required.
Ideal parallelism and the master-worker paradigm: ideally there is no communication, giving maximum speedup. Practical embarrassingly parallel applications have initial communication and (sometimes) a final communication, in a master-worker paradigm where the master submits jobs to the workers and there is no communication between workers.
Parallel tradeoffs. Embarrassingly parallel with perfect load balancing: tcomp = ts / P, assuming P workers and sequential execution time ts. The master-worker paradigm gives speedup only if the workers have to perform a reasonable amount of work, that is, the sequential time must exceed the total communication time plus one worker's time: ts > tP = tcomm + tcomp. The speedup is SP = ts / tP = P tcomp / (tcomm + tcomp) = P / (r^-1 + 1), where r = tcomp / tcomm; thus SP -> P as r -> infinity. However, communication tcomm can be expensive: typically ts < tcomm for small tasks, that is, the time to send/recv data to the workers is more expensive than doing all
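A concrete master-worker instance of one of the listed examples, a Monte Carlo estimate of pi (an editor's sketch with assumed names and chunk sizes, using Python's multiprocessing; the slides prescribe no particular code):

    import random
    from multiprocessing import Pool

    def count_hits(trials):
        # Worker: independent random trials, no communication with other workers.
        hits = 0
        for _ in range(trials):
            x, y = random.random(), random.random()
            hits += (x * x + y * y <= 1.0)
        return hits

    if __name__ == "__main__":
        P, n = 4, 1_000_000
        with Pool(P) as pool:             # master sends the initial data...
            hits = sum(pool.map(count_hits, [n // P] * P))  # ...and collects results
        print("pi ~", 4.0 * hits / n)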
- Byzantine Fault Tolerance for Nondeterministic Applications (Bo Chen)
BYZANTINE FAULT TOLERANCE FOR NONDETERMINISTIC APPLICATIONS. Bo Chen, Bachelor of Science in Computer Engineering, Nanjing University of Science & Technology, China, July 2006. Submitted in partial fulfillment of the requirements for the degree Master of Science in Computer Engineering at Cleveland State University, December 2008. The thesis has been approved for the Department of Electrical and Computer Engineering and the College of Graduate Studies by the thesis committee: Dr. Wenbing Zhao (chairperson), Dr. Yongjian Fu, and Dr. Ye Zhu.
Acknowledgement: First, I must thank my thesis supervisor, Dr. Wenbing Zhao, for his patience, careful thought, insightful comments, and constant support during my two years of graduate study and my career. I feel very fortunate to have had the chance to work closely with him, and this thesis is as much a product of his guidance as it is of my effort. The other members of my thesis committee, Dr. Yongjian Fu and Dr. Ye Zhu, suggested many important improvements to this thesis and interesting directions for future work; I greatly appreciate their suggestions. It has been a pleasure to be a graduate student in the Secure and Dependable Systems Group, and I want to thank all the group members, Honglei Zhang and Hua Chai, for the discussions we had. I also wish to thank Dr. Fuqin Xiong for his help in advising on the details of the thesis submission process. I am grateful to my parents for their support and understanding over the years, especially in the month leading up to this thesis. Above all, I want to thank all my friends who made my life great when I was preparing and writing this thesis.