An Ownership Policy and Deadlock Detector for Promises
PDF, 16 pages, 1020 KB
Recommended publications
-
Deadlock: Why Does It Happen? (CS 537, Andrea C. Arpaci-Dusseau)
University of Wisconsin-Madison, Computer Sciences Department. CS 537: Introduction to Operating Systems. Andrea C. Arpaci-Dusseau and Remzi H. Arpaci-Dusseau.

Deadlock, informally: every entity is waiting for a resource held by another entity; none releases until it gets what it is waiting for. Questions answered in this lecture: What are the four necessary conditions for deadlock? How can deadlock be prevented? How can deadlock be avoided? How can deadlock be detected and recovered from?

Deadlock example. Two threads access two shared variables, A and B. Variable A is protected by lock x, variable B by lock y. How to add lock and unlock statements?

```c
int A, B;
lock_t x, y;

/* Thread 1 */      /* Thread 2 */
A += 10;            B += 10;
B += 20;            A += 20;
A += B;             A += B;
A += 30;            B += 30;
```

One attempt at locking:

```c
/* Thread 1 */      /* Thread 2 */
lock(x);            lock(y);
A += 10;            B += 10;
lock(y);            lock(x);
B += 20;            A += 20;
A += B;             A += B;
unlock(y);          unlock(x);
A += 30;            B += 30;
unlock(x);          unlock(y);
```

What can go wrong? Thread 1 takes x and then waits for y, while Thread 2 takes y and then waits for x: a deadlock.

Representing deadlock. Two common ways of representing deadlock are the wait-for graph and the resource-allocation graph. Vertices are threads (or processes) and resources (anything of value, including locks and semaphores); edges indicate that one thread is waiting for another. (The slides illustrate an edge T1 "waiting for" T2 in the wait-for graph, and T1 "wants" y "held by" T2 in the resource-allocation graph.)

Conditions for deadlock:
• Mutual exclusion: a resource cannot be shared; requests are delayed until the resource is released.
• Hold-and-wait: a thread holds one resource while it waits for another.
• No preemption: resources are released only voluntarily, after completion.
• Circular wait: there is a circular chain of threads, each waiting for a resource held by the next.
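The lecture's prevention question has a classic answer for this example: impose a single global lock order, which makes circular wait impossible. A minimal sketch of the repaired code, assuming POSIX threads (the slides' lock_t/lock/unlock pseudo-ops mapped to pthread mutexes; this is an illustration, not the lecture's own code):

```c
#include <pthread.h>
#include <stdio.h>

int A = 0, B = 0;
pthread_mutex_t x = PTHREAD_MUTEX_INITIALIZER; /* protects A */
pthread_mutex_t y = PTHREAD_MUTEX_INITIALIZER; /* protects B */

/* Both threads now acquire x before y: with one global lock order
 * there can be no cycle of waiters, hence no deadlock. */
void *thread1(void *arg) {
    pthread_mutex_lock(&x);
    A += 10;
    pthread_mutex_lock(&y);
    B += 20;
    A += B;
    pthread_mutex_unlock(&y);
    A += 30;
    pthread_mutex_unlock(&x);
    return NULL;
}

void *thread2(void *arg) {
    pthread_mutex_lock(&x);   /* was lock(y) first on the slide */
    pthread_mutex_lock(&y);
    B += 10;
    A += 20;
    A += B;
    B += 30;
    pthread_mutex_unlock(&y);
    pthread_mutex_unlock(&x);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, thread1, NULL);
    pthread_create(&t2, NULL, thread2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("A=%d B=%d\n", A, B);
    return 0;
}
```

Thread 2 now holds both locks for its whole critical section, trading a little concurrency for deadlock freedom.
-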
The Dining Philosophers Problem
The dining philosophers problem: definition. It is an artificial problem widely used to illustrate the problems linked to resource sharing in concurrent programming. The problem is usually described as follows.
• A given number of philosophers are seated at a round table.
• Each of the philosophers shares his time between two activities: thinking and eating.
• To think, a philosopher does not need any resources; to eat he needs two pieces of silverware.
• However, the table is set in a very peculiar way: between every pair of adjacent plates, there is only one fork.
• A philosopher being clumsy, he needs two forks to eat: the one on his right and the one on his left.
• It is thus impossible for a philosopher to eat at the same time as one of his neighbors: the forks are a shared resource for which the philosophers are competing.
• The problem is to organize access to these shared resources in such a way that everything proceeds smoothly.

(Illustration: philosophers P0 to P4 around a round table, with forks f0 to f4 between adjacent plates.)

The dining philosophers problem: a first solution.
• This first solution uses a semaphore to model each fork.
• Taking a fork is then done by executing a wait operation on the semaphore, which suspends the process if the fork is not available.
• Freeing a fork is naturally done with a signal operation.

```c
/* Definitions and global initializations */
#define N ?                /* number of philosophers */
semaphore fork[N];         /* semaphores modeling the forks */
int j;
for (j = 0; j < N; j++) fork[j] = 1;
```

Each philosopher (0 to N-1) corresponds to a process executing the following procedure, where i is the number of the philosopher.
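The procedure itself is cut off in this excerpt. A minimal sketch of the classic first attempt, in the same pseudo-C style as the notes (wait, signal, and the fork semaphores from the fragment above; presumably close to what the notes show):

```c
void philosopher(int i)       /* i = number of the philosopher */
{
    while (1) {
        think();
        wait(fork[i]);              /* take the left fork  */
        wait(fork[(i + 1) % N]);    /* take the right fork */
        eat();
        signal(fork[(i + 1) % N]);  /* put both forks back */
        signal(fork[i]);
    }
}
```

This first solution is exactly the kind the surrounding notes warn about: if every philosopher grabs his left fork at the same moment, each then waits forever for a right fork held by his neighbor, a textbook circular wait.
-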
CSC 553 Operating Systems, Lecture 4: Concurrency, Mutual Exclusion and Synchronization
CSC 553 Operating Systems. Lecture 4: Concurrency - Mutual Exclusion and Synchronization.

Multiple processes. Operating system design is concerned with the management of processes and threads: multiprogramming, multiprocessing, and distributed processing.

Concurrency arises in three different contexts:
• Multiple applications: invented to allow processing time to be shared among active applications.
• Structured applications: an extension of modular design and structured programming.
• Operating system structure: OSes are themselves implemented as a set of processes or threads.

(A slide of key terms related to concurrency follows in the original.)

Principles of concurrency. Interleaving and overlapping can both be viewed as examples of concurrent processing, and both present the same problems. On a uniprocessor, the relative speed of execution of processes cannot be predicted; it depends on the activities of other processes, the way the OS handles interrupts, and the scheduling policies of the OS.

Difficulties of concurrency: sharing of global resources; it is difficult for the OS to manage the allocation of resources optimally; and it is difficult to locate programming errors, because results are not deterministic and reproducible.

Race condition. A race condition occurs when multiple processes or threads read and write data items, and the final result depends on the order of execution: the "loser" of the race is the process that updates last and thereby determines the final value of the variable.

Operating system concerns. Design and management issues raised by the existence of concurrency: the OS must be able to keep track of the various processes …
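The "loser of the race" definition is easy to demonstrate. A minimal sketch (not from the slides), in C with POSIX threads, where two threads race on an unprotected shared counter:

```c
#include <pthread.h>
#include <stdio.h>

long counter = 0;   /* shared and unprotected: a race condition */

void *worker(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;  /* read-modify-write; interleavings lose updates */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 2000000, but usually less: the final value depends on
     * the order of execution, exactly as the slide defines. */
    printf("counter = %ld\n", counter);
    return 0;
}
```
-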
Supervision 1: Semaphores, Generalised Producer-Consumer, and Priorities
Concurrent and Distributed Systems, 2015–2016. Supervision 1: Semaphores, generalised producer-consumer, and priorities.

Q0 Semaphores

(a) Counting semaphores are initialised to a value: 0, 1, or some arbitrary n. For each case, list one situation in which that initialisation would make sense.

(b) Write down two fragments of pseudo-code, to be run in two different threads, that experience deadlock as a result of poor use of mutual exclusion.

(c) Deadlock is not limited to mutual exclusion; it can occur any time its preconditions (especially hold-and-wait, cyclic dependence) occur. Describe a situation in which two threads making use of semaphores for condition synchronisation (e.g., in producer-consumer) can deadlock.

```c
int buffer[N];
int in = 0, out = 0;
spaces = new Semaphore(N);
items  = new Semaphore(0);
guard  = new Semaphore(1);  // for mutual exclusion

// producer threads
while (true) {
    item = produce();
    wait(spaces);
    wait(guard);
    buffer[in] = item;
    in = (in + 1) % N;
    signal(guard);
    signal(items);
}

// consumer threads
while (true) {
    wait(items);
    wait(guard);
    item = buffer[out];
    out = (out + 1) % N;
    signal(guard);
    signal(spaces);
    consume(item);
}
```

Figure 1: Pseudo-code for a producer-consumer queue using semaphores.

(d) In Figure 1, items and spaces are used for condition synchronisation, and guard is used for mutual exclusion. Why will this implementation become unsafe in the presence of multiple consumer threads or multiple producer threads, if we remove guard?

(e) Semaphores are introduced in part to improve efficiency under contention around critical sections by preferring blocking to spinning. Describe a situation in which this might not be the case; more generally, under what circumstances will semaphores hurt, rather than help, performance?

(f) The implementation of semaphores themselves depends on two classes of operations: increment/decrement of an integer, and blocking/waking up threads. …
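One classic instance of the situation asked about in (c), sketched here in the style of Figure 1 (an illustration, not part of the original exercise sheet): swap the two waits in the producer so that guard is taken before spaces.

```c
// producer with the waits swapped: deadlock-prone
while (true) {
    item = produce();
    wait(guard);   // take the mutual-exclusion semaphore first...
    wait(spaces);  // ...then block here, holding guard, if the buffer is full
    buffer[in] = item;
    in = (in + 1) % N;
    signal(guard);
    signal(items);
}
```

With the buffer full, this producer blocks on spaces while holding guard; a consumer then blocks on wait(guard) and can never reach signal(spaces). Each holds what the other needs: hold-and-wait plus a cyclic dependence, hence deadlock.
-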
Composable Concurrency Models
Composable Concurrency Models. Dan Stelljes, Division of Science and Mathematics, University of Minnesota, Morris; Morris, Minnesota, USA 56267.

ABSTRACT. The need to manage concurrent operations in applications has led to the development of a variety of concurrency models. Modern programming languages generally provide several concurrency models to serve different requirements, and programmers benefit from being able to use them in tandem. We discuss challenges surrounding concurrent programming and examine situations in which conflicts between models can occur. Additionally, we describe attempts to identify features of common concurrency models and develop lower-level abstractions capable of supporting a variety of models.

1. INTRODUCTION. Most interactive computer programs depend on concurrency, the ability to perform different tasks at the same time. A web browser, for instance, might at any point be rendering documents in multiple tabs, transferring files, and … Given that an application is likely to make use of more than one concurrency model, programmers would prefer that different types of models could safely interact. However, different models do not necessarily work well together, nor are they designed to. Recent work has attempted to identify common "building blocks" that could be used to compose a variety of models, possibly eliminating subtle problems when different models interact and allowing models to be represented at lower levels without resorting to rough approximations [9, 11, 12].

2. BACKGROUND. In a concurrent program, the history of operations may not be the same for every execution. An entirely sequential program could be proved to be correct by showing that its history (that is, the sequence in which its operations are performed) always yields a correct result.
-
CS39002 Operating Systems Lab, Assignment 5, Q1: Multiple Producers and Consumers
CS39002: Operating Systems Lab. Assignment 5. Floating date: 29/2/2016; due date: 14/3/2016.

Q1. Multiple Producers and Consumers

Problem definition: You have to implement a system which ensures synchronisation in a producer-consumer scenario. You also have to demonstrate the deadlock condition and provide solutions for avoiding deadlock. In this system a main process creates 5 producer processes and 5 consumer processes which share 2 resources (queues). The producer's job is to generate a piece of data, put it into the queue, and repeat. At the same time, the consumer process consumes the data, i.e., removes it from the queue. In the implementation, you are asked to ensure synchronization and mutual exclusion. For instance, the producer should be stopped if the buffer is full, and the consumer should be blocked if the buffer is empty. You also have to enforce mutual exclusion while the processes are trying to acquire the resources.

Manager (manager.c): It is the main process that creates the producer and consumer processes (5 each). After that it periodically checks whether the system is in deadlock. Deadlock refers to a specific condition when two or more processes are each waiting for another to release a resource, or more than two processes are waiting for resources in a circular chain.

Implementation: The manager process (manager.c) does the following:
i. It creates a file matrix.txt which holds a matrix with 2 rows (one per resource) and 10 columns (one per producer and consumer process). Each entry (i, j) of that matrix can have three values:
● 0 => process j has not requested queue i, or has released queue i
● 1 => process j has requested queue i
● 2 => process j has acquired the lock of queue i
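A minimal sketch of the manager's periodic deadlock check, under one possible reading of the spec above (rows are queues, columns are processes; the function name and exit codes are my own, not part of the assignment): with only two queues, deadlock appears in matrix.txt as one process holding queue 0 while requesting queue 1, and another holding queue 1 while requesting queue 0.

```c
#include <stdio.h>

#define NRES  2   /* queues (rows)       */
#define NPROC 10  /* processes (columns) */

/* Circular wait over two resources: somebody holds queue 0 and wants
 * queue 1 while somebody else holds queue 1 and wants queue 0. */
int deadlocked(int m[NRES][NPROC]) {
    int hold0_want1 = 0, hold1_want0 = 0;
    for (int j = 0; j < NPROC; j++) {
        if (m[0][j] == 2 && m[1][j] == 1) hold0_want1 = 1;
        if (m[1][j] == 2 && m[0][j] == 1) hold1_want0 = 1;
    }
    return hold0_want1 && hold1_want0;
}

int main(void) {
    int m[NRES][NPROC];
    FILE *f = fopen("matrix.txt", "r");
    if (!f) return 2;
    for (int i = 0; i < NRES; i++)
        for (int j = 0; j < NPROC; j++)
            if (fscanf(f, "%d", &m[i][j]) != 1) { fclose(f); return 2; }
    fclose(f);
    printf(deadlocked(m) ? "deadlock detected\n" : "no deadlock\n");
    return 0;
}
```
-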
Reactive Async: Expressive Deterministic Concurrency
Reactive Async: Expressive Deterministic Concurrency. Philipp Haller, Simon Geries (KTH Royal Institute of Technology, Sweden); Michael Eichberg, Guido Salvaneschi (TU Darmstadt, Germany).

Abstract. Concurrent programming is infamous for its difficulty. An important source of difficulty is non-determinism, stemming from unpredictable interleavings of concurrent activities. Futures and promises are widely-used abstractions that help designing deterministic concurrent programs, although this property cannot be guaranteed statically in mainstream programming languages. Deterministic-by-construction concurrent programming models avoid this issue, but they typically restrict expressiveness in important ways. This paper introduces a concurrent programming model, Reactive Async, which decouples concurrent computations using …

From the introduction: … can generate non-deterministic results because of their unpredictable scheduling. Traditionally, developers address these problems by protecting state from concurrent access via synchronisation. Yet, concurrent programming remains an art: insufficient synchronisation leads to unsound programs, but synchronising too much does not exploit hardware capabilities effectively for parallel execution. Over the years researchers have proposed concurrency models that attempt to overcome these issues. For example the actor model [13] encapsulates state into actors which communicate via asynchronous messages, a solution that avoids shared state and hence (low-level) race conditions …
-
A Reduction Semantics for Direct-Style Asynchronous Observables
Journal of Logical and Algebraic Methods in Programming 105 (2019) 75–111. www.elsevier.com/locate/jlamp

A reduction semantics for direct-style asynchronous observables. Philipp Haller (KTH Royal Institute of Technology, Sweden); Heather Miller (Carnegie Mellon University, USA).

Abstract. Asynchronous programming has gained in importance, not only due to hardware developments like multi-core processors, but also due to pervasive asynchronicity in client-side Web programming and large-scale Web applications. However, asynchronous programming is challenging. For example, control-flow management and error handling are much more complex in an asynchronous than a synchronous context. Programming with asynchronous event streams is especially difficult: expressing asynchronous stream producers and consumers requires explicit state machines in continuation-passing style when using widely-used languages like Java. In order to address this challenge, recent language designs like Google's Dart introduce asynchronous generators, which allow expressing complex asynchronous programs in a familiar blocking style while using efficient non-blocking concurrency control under the hood. However, several issues remain unresolved, including the integration of analogous constructs into statically-typed languages, and the formalization and proof of important correctness properties. This paper presents a design for asynchronous stream generators for Scala, thereby extending previous facilities for asynchronous programming in Scala from tasks/futures to asynchronous streams.
-
Deadlock-Free Oblivious Routing for Arbitrary Topologies
Deadlock-Free Oblivious Routing for Arbitrary Topologies. Jens Domke (Center for Information Services and High Performance Computing, Technische Universität Dresden, Dresden, Germany); Torsten Hoefler (Blue Waters Directorate, National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA); Wolfgang E. Nagel (Center for Information Services and High Performance Computing, Technische Universität Dresden, Dresden, Germany).

Abstract. Efficient deadlock-free routing strategies are crucial to the performance of large-scale computing systems. There are many methods, but it remains a challenge to achieve lowest latency and highest bandwidth for irregular or unstructured high-performance networks. We investigate a novel routing strategy based on the single-source-shortest-path routing algorithm and extend it to use virtual channels to guarantee deadlock-freedom. We show that this algorithm achieves minimal latency and high bandwidth with only a low number of virtual channels and can be implemented in practice. We demonstrate that the problem of finding the minimal number of virtual channels needed to route a general network deadlock-free …

From the introduction: … network performance without inclusion of the routing algorithm or the application communication pattern. In regular operation these values can hardly be achieved due to network congestion. The largest gap between real and idealized performance is often in bisection bandwidth, which by its definition only considers the topology. The effective bisection bandwidth [2] is the average bandwidth for routing messages between random perfect matchings of endpoints (also known as permutation routing) through the network, and thus considers the routing algorithm.
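The deadlock-freedom criterion behind virtual-channel schemes like this one is the classic Dally-Seitz condition: a routing function is deadlock-free if its channel dependency graph (channels as vertices, with an edge wherever a packet may hold one channel while requesting the next) is acyclic. A minimal sketch of that acyclicity check (my own illustration, with an assumed adjacency-matrix representation; not code from the paper):

```c
#include <stdio.h>

#define MAXC 64                /* assumed bound on channel count */

int nchan = 3;                 /* vertices: network channels */
int adj[MAXC][MAXC];           /* adj[u][v]=1: u may be held while v is requested */
int state[MAXC];               /* 0 = unvisited, 1 = on DFS stack, 2 = finished */

/* Depth-first search; returns 1 if a cycle is reachable from u. */
int has_cycle_from(int u) {
    state[u] = 1;
    for (int v = 0; v < nchan; v++) {
        if (!adj[u][v]) continue;
        if (state[v] == 1) return 1;               /* back edge: cycle */
        if (state[v] == 0 && has_cycle_from(v)) return 1;
    }
    state[u] = 2;
    return 0;
}

/* Acyclic channel dependency graph => deadlock-free routing. */
int deadlock_free(void) {
    for (int u = 0; u < nchan; u++)
        if (state[u] == 0 && has_cycle_from(u)) return 0;
    return 1;
}

int main(void) {
    /* A ring of dependencies 0 -> 1 -> 2 -> 0: cyclic, so a routing
     * function inducing it can deadlock. Splitting a physical link into
     * virtual channels adds vertices that can break such cycles. */
    adj[0][1] = adj[1][2] = adj[2][0] = 1;
    printf("deadlock-free: %s\n", deadlock_free() ? "yes" : "no");
    return 0;
}
```

As the abstract describes, the paper's goal is to keep this graph acyclic along single-source-shortest-path routes while using as few virtual channels as possible.
-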
Sherlock: Scalable Deadlock Detection for Concurrent Programs
Sherlock: Scalable Deadlock Detection for Concurrent Programs. Mahdi Eslamimehr, Jens Palsberg (UCLA, University of California, Los Angeles, USA). {mahdi,palsberg}@cs.ucla.edu

ABSTRACT. We present a new technique to find real deadlocks in concurrent programs that use locks. For 4.5 million lines of Java, our technique found almost twice as many real deadlocks as four previous techniques combined. Among those, 33 deadlocks happened after more than one million computation steps, including 27 new deadlocks. We first use a known technique to find 1275 deadlock candidates and then we determine that 146 of them are real deadlocks. Our technique combines previous work on concolic execution with a new constraint-based approach that iteratively drives an execution towards a deadlock candidate.

From the introduction: One possible schedule of the program lets the first thread acquire the lock of A and lets the other thread acquire the lock of B. Now the program is deadlocked: the first thread waits for the lock of B, while the second thread waits for the lock of A. Usually a deadlock is a bug and programmers should avoid deadlocks. However, programmers may make mistakes, so we have a bug-finding problem: find as many deadlocks as possible in a given program. Researchers have developed many techniques to help find deadlocks. Some require program annotations that typically must be supplied by a programmer; examples include [18, 47, 6, 54, 66, 60, 42, 23, 35]. Other techniques work with unannotated programs and thus they are easier to use. In this paper we focus on techniques that work with unannotated Java programs.

Categories and Subject Descriptors: D.2.5 Software Engineering [Testing and Debugging]
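The program behind the introduction's example appears only in prose in this excerpt. A C rendering with POSIX threads of a program with that behaviour (the paper itself targets Java; the lock names A and B come from the prose):

```c
#include <pthread.h>

pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

void *first(void *arg) {       /* acquires A, then wants B */
    pthread_mutex_lock(&A);
    pthread_mutex_lock(&B);    /* blocks while the other thread holds B */
    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
    return NULL;
}

void *second(void *arg) {      /* acquires B, then wants A */
    pthread_mutex_lock(&B);
    pthread_mutex_lock(&A);    /* blocks while the other thread holds A */
    pthread_mutex_unlock(&A);
    pthread_mutex_unlock(&B);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, first, NULL);
    pthread_create(&t2, NULL, second, NULL);
    /* Under the schedule described above (t1 takes A, t2 takes B),
     * neither join ever returns: a real deadlock. */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```

Whether the deadlock manifests depends on the schedule, which is exactly why Sherlock drives executions towards candidates instead of hoping for an unlucky interleaving.
-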
Routing on the Channel Dependency Graph
Technische Universität Dresden, Fakultät Informatik, Institut für Technische Informatik, Professur für Rechnerarchitektur.

Routing on the Channel Dependency Graph: A New Approach to Deadlock-Free, Destination-Based, High-Performance Routing for Lossless Interconnection Networks.

Dissertation for the academic degree of Doctor rerum naturalium (Dr. rer. nat.), submitted by Jens Domke, born 12 September 1984 in Bad Muskau. Submitted: 30 March 2017; defended: 16 June 2017. Reviewers: Prof. Dr. rer. nat. Wolfgang E. Nagel (TU Dresden, Germany) and Professor Tor Skeie, PhD (University of Oslo, Norway).

"Multicast loops are bad since the same multicast packet will go around and around, inevitably creating a black hole that will destroy the Earth in a fiery conflagration." (OpenSM source code)

Abstract. In the pursuit of ever-increasing compute power, and with Moore's law slowly coming to an end, high-performance computing started to scale out to larger systems. Alongside the increasing system size, the interconnection network is growing to accommodate and connect tens of thousands of compute nodes. These networks have a large influence on the total cost, application performance, energy consumption, and overall system efficiency of the supercomputer. Unfortunately, state-of-the-art routing algorithms, which define the packet paths through the network, do not utilize this important resource efficiently. Topology-aware routing algorithms become increasingly inapplicable due to irregular topologies, which either are irregular by design or, most often, are the result of hardware failures. Exchanging faulty network components potentially requires whole-system downtime, further increasing the cost of the failure. This management approach becomes more and more impractical due to the scale of today's networks and the accompanying steady decrease of the mean time between failures. …
-
Abstract Interface Behavior of an Object-Oriented Language with Futures and Promises
Abstract interface behavior of an object-oriented language with futures and promises. 3 September 2007. Erika Ábrahám (Albert-Ludwigs-University Freiburg, Germany); Immo Grabe, Andreas Grüner (Christian-Albrechts-University Kiel, Germany); Martin Steffen (University of Oslo, Norway).

1 Motivation

How to marry concurrency and object-orientation has been a long-standing issue; see e.g. [2] for an early discussion of different design choices. Recently, the thread-based model of concurrency, prominently represented by languages like Java and C#, has been criticized, especially in the context of component-based software development. As the word indicates, components are (software) artifacts intended for composition, i.e., open systems interacting with a surrounding environment. To compare different concurrency models on a solid mathematical basis, a semantical description of the interface behavior is needed, and this is what we do in this work. We present an open semantics for a core of the Creol language [4,7], an object-oriented, concurrent language, featuring in particular asynchronous method calls and (since recently [5]) future-based concurrency.

Futures and promises. A future, very generally, represents a result yet to be computed. It acts as a proxy for, or reference to, the delayed result from a given sequential piece of code (e.g., a method or a function body in an object-oriented, resp. a functional setting). As the client of the delayed result can proceed with its own execution until it actually needs the result, futures provide a natural, lightweight, and (in a functional setting) transparent mechanism to introduce parallelism into a language.
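The future-as-proxy idea is easy to make concrete. A minimal sketch in C with POSIX threads (not Creol; the future_t type and its operations are my own names), where the client proceeds with its own work until it actually needs the value:

```c
#include <pthread.h>
#include <stdio.h>

/* A future: a placeholder for an int that is yet to be computed. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  done;
    int ready;
    int value;
} future_t;

void future_init(future_t *f) {
    pthread_mutex_init(&f->lock, NULL);
    pthread_cond_init(&f->done, NULL);
    f->ready = 0;
}

/* Promise side: fulfil the future (once). */
void future_set(future_t *f, int v) {
    pthread_mutex_lock(&f->lock);
    f->value = v;
    f->ready = 1;
    pthread_cond_signal(&f->done);
    pthread_mutex_unlock(&f->lock);
}

/* Client side: block only when the result is actually needed. */
int future_get(future_t *f) {
    pthread_mutex_lock(&f->lock);
    while (!f->ready)
        pthread_cond_wait(&f->done, &f->lock);
    int v = f->value;
    pthread_mutex_unlock(&f->lock);
    return v;
}

future_t fut;

void *worker(void *arg) {
    future_set(&fut, 6 * 7);             /* the delayed computation */
    return NULL;
}

int main(void) {
    pthread_t t;
    future_init(&fut);
    pthread_create(&t, NULL, worker, NULL);
    /* ... the client can do unrelated work here ... */
    printf("result = %d\n", future_get(&fut));  /* proxy resolved here */
    pthread_join(t, NULL);
    return 0;
}
```

A client whose future is never fulfilled blocks in future_get forever; detecting that kind of cycle among promises is exactly the concern of "An Ownership Policy and Deadlock Detector for Promises" at the top of this page.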