Capability-Based Type Systems for Concurrency Control
Recommended publications
Fast and Correct Load-Link/Store-Conditional Instruction Handling in DBT Systems
Abstract: Dynamic Binary Translation (DBT) requires the implementation of load-link/store-conditional (LL/SC) primitives for guest systems that rely on this form of synchronization. When targeting e.g. x86 host systems, LL/SC guest instructions are typically emulated using atomic Compare-and-Swap (CAS) instructions on the host. Whilst this direct mapping is efficient, this approach is problematic due to subtle differences between LL/SC and CAS semantics. In this paper, we demonstrate that this is a real problem, and we provide code examples that fail to execute correctly on QEMU and a commercial DBT system, which both use the CAS approach to LL/SC emulation. We then develop two novel and provably correct LL/SC emulation schemes: (1) a purely software based scheme, which uses the DBT system's page translation cache for correctly selecting between fast, but unsynchronized, and slow, but fully synchronized memory accesses, and (2) a hardware accelerated scheme that leverages hardware transactional memory (HTM) provided by the host. We have implemented these two schemes in the Synopsys DesignWare® ARC® nSIM DBT …

Figure 1: The compare-and-swap instruction atomically reads from a memory address and compares the current value with an expected value. If these values are equal, then a new value is written to memory. The instruction indicates (usually through a return register or flag) whether or not the value was successfully written.
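As a rough illustration (not code from the paper), the common CAS-based emulation of an LL/SC read-modify-write looks like the following C++ sketch; the comment notes the semantic gap the paper is about: CAS only compares values, so an A→B→A change between the read and the CAS goes undetected, whereas a real store-conditional would fail.

```cpp
#include <atomic>

// Minimal sketch: emulating an LL/SC-style atomic increment with CAS.
// CAS compares *values*, so it cannot detect that the location changed
// from A to B and back to A between the "load-linked" read and the
// "store-conditional" write; a genuine LL/SC pair would fail in that case.
int atomic_increment(std::atomic<int>& counter) {
    int observed = counter.load(std::memory_order_relaxed);       // plays the role of LL
    while (!counter.compare_exchange_weak(observed, observed + 1, // plays the role of SC
                                          std::memory_order_acq_rel,
                                          std::memory_order_relaxed)) {
        // On failure, observed is refreshed with the current value; retry.
    }
    return observed + 1;
}
```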
Active Object Systems
AN ABSTRACT OF THE THESIS OF Sungwoon Choi for the degree of Doctor of Philosophy in Computer Science, presented on February 6, 1992. Title: Active Object Systems. Abstract approved: Toshimi Minoura.

An active object system is a transition-based object-oriented system suitable for the design of various concurrent systems. An AOS consists of a collection of interacting objects, where the behavior of each object is determined by the transition statements provided in the class of that object. A transition statement is a condition-action pair, an equational assignment statement, or an event routine. The transition statements provided for each object can access, besides the state of that object, the states of the other objects known to it through its interface variables. Interface variables are bound to objects when objects are instantiated so that desired connections among objects are established. The major benefit of the AOS approach is that an active system can be hierarchically composed from its active software components as if it were a hardware system. An AOS provides better encapsulation and more flexible communication protocols than ordinary object-oriented systems, since control within an AOS is localized. …
Introduction to Multi-Threading and Vectorization
Matti Kortelainen, LArSoft Workshop 2019, 25 June 2019.

Outline (broad introductory overview):
• Why multithread?
• What is a thread?
• Some threading models: std::thread, OpenMP (fork-join), Intel Threading Building Blocks (TBB) (tasks)
• Race condition, critical region, mutual exclusion, deadlock
• Vectorization (SIMD)

Motivations for multithreading:
• One process on a node: speedups from parallelizing parts of the programs. Any problem can get a speedup if the threads can cooperate on the same core (sharing the L1 cache) or on an L2 cache (which may be shared among a small number of cores).
• Fully loaded node: save memory and other resources. Threads can share objects, so N threads can use significantly less memory than N processes.
• If the smallest chunk of data is so big that only one fits in memory at a time, is there any other option?

What is a (software) thread? (in POSIX/Linux): the "smallest sequence of programmed instructions that can be managed independently by a scheduler" [Wikipedia]. A thread has its own program counter, registers, stack, and thread-local memory (better to avoid in general). Threads of a process share everything else, e.g. program code, constants, heap memory, network connections, file handles …
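A rough illustration of the std::thread model listed above (my own sketch, not from the workshop slides): split a reduction across four threads, each writing to its own slot so no synchronization is needed, then join and combine the partial results.

```cpp
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

// Sketch: a sum parallelized over four std::thread workers. Each thread
// accumulates into its own slot of `partial`, so no locking is required.
int main() {
    const std::size_t n = 1'000'000;
    std::vector<double> data(n, 1.0);

    const unsigned nthreads = 4;
    std::vector<double> partial(nthreads, 0.0);
    std::vector<std::thread> threads;

    for (unsigned t = 0; t < nthreads; ++t) {
        threads.emplace_back([&, t] {
            const std::size_t begin = t * n / nthreads;
            const std::size_t end   = (t + 1) * n / nthreads;
            partial[t] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0.0);
        });
    }
    for (auto& th : threads) th.join();

    std::cout << std::accumulate(partial.begin(), partial.end(), 0.0) << '\n';
}
```

The fork-join (OpenMP) and task (TBB) models mentioned on the slide would express the same decomposition with a parallel loop or reduction instead of explicit thread management.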
CSC 553 Operating Systems, Lecture 4 - Concurrency: Mutual Exclusion and Synchronization
Multiple Processes: operating system design is concerned with the management of processes and threads, in the contexts of multiprogramming, multiprocessing, and distributed processing.

Concurrency arises in three different contexts:
• Multiple applications – invented to allow processing time to be shared among active applications
• Structured applications – an extension of modular design and structured programming
• Operating system structure – the OS itself is implemented as a set of processes or threads

Principles of concurrency: interleaving and overlapping can both be viewed as examples of concurrent processing, and both present the same problems. On a uniprocessor, the relative speed of execution of processes cannot be predicted; it depends on the activities of other processes, the way the OS handles interrupts, and the scheduling policies of the OS.

Difficulties of concurrency: sharing of global resources; it is difficult for the OS to manage the allocation of resources optimally; and it is difficult to locate programming errors, as results are not deterministic and reproducible.

Race condition: occurs when multiple processes or threads read and write data items, and the final result depends on the order of execution – the "loser" of the race is the process that updates last and will determine the final value of the variable (a minimal sketch of such a race appears after this item).

Operating system concerns: design and management issues raised by the existence of concurrency. The OS must be able to keep track of the various processes …
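A minimal sketch of the race condition described above (my illustration, not from the lecture notes): two threads increment a shared counter without any mutual exclusion, so updates are lost depending on the interleaving and the final value is nondeterministic.

```cpp
#include <iostream>
#include <thread>

// Sketch of a race condition: ++shared is a load, add, store sequence, and
// another thread can interleave between those steps, losing updates.
int main() {
    int shared = 0;   // intentionally NOT protected by any lock

    auto work = [&shared] {
        for (int i = 0; i < 100000; ++i) {
            ++shared;   // unsynchronized read-modify-write
        }
    };

    std::thread t1(work), t2(work);
    t1.join();
    t2.join();

    std::cout << "shared = " << shared << '\n';   // usually less than 200000
}
```

Wrapping the increment in a critical region (a mutex) or using std::atomic<int> restores a deterministic result of 200000.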
An Efficient Implementation of Guard-Based Synchronization for an Object-Oriented Programming Language
By Shucai Yao, M.Sc., B.Sc. (Computer Science, University of Science and Technology Beijing). A thesis submitted to the Department of Computing and Software and the School of Graduate Studies of McMaster University in partial fulfilment of the requirements for the degree of Doctor of Philosophy, July 2020. Supervisors: Dr. Emil Sekerinski and Dr. William M. Farmer.

Abstract: Object-oriented programming has had a significant impact on software development because it provides programmers with a clear structure of a large system. It encapsulates data and operations into objects, groups objects into classes and dynamically binds operations to program code. With the emergence of multi-core processors, application developers have to explore concurrent programming to take full advantage of multi-core technology. However, when it comes to concurrent programming, object-oriented programming remains elusive as a useful programming tool. Most object-oriented programming languages do have some extensions for concurrency, but concurrency is implemented independently of objects: for example, concurrency in Java is managed separately with the Thread object. We employ a programming model called Lime that combines action systems tightly with object-oriented programming and implements concurrency by extending classes with actions and guarded methods. …
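The excerpt does not show Lime syntax, so the following is only a loose analogue under that caveat: a guarded method can be approximated in C++ by a monitor whose method blocks until its guard holds, here a take() that is enabled only when the queue is non-empty.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <utility>

// Rough analogue of a guarded method (not Lime syntax): take() is only
// enabled when its guard "queue not empty" holds; callers block otherwise.
template <typename T>
class GuardedQueue {
public:
    void put(T value) {
        {
            std::lock_guard<std::mutex> lock(m_);
            items_.push(std::move(value));
        }
        guard_.notify_one();   // a guard may now hold; let a waiter re-evaluate
    }

    T take() {
        std::unique_lock<std::mutex> lock(m_);
        guard_.wait(lock, [this] { return !items_.empty(); });   // the guard
        T value = std::move(items_.front());
        items_.pop();
        return value;
    }

private:
    std::mutex m_;
    std::condition_variable guard_;
    std::queue<T> items_;
};
```

In a guarded-method language the guard is part of the method declaration and is re-evaluated by the runtime; the condition-variable wait loop merely simulates that behavior here.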
Towards Gradually Typed Capabilities in the Pi-Calculus
Matteo Cimini, University of Massachusetts Lowell, Lowell, MA, USA.

Gradual typing is an approach to integrating static and dynamic typing within the same language, and puts the programmer in control of which regions of code are type checked at compile-time and which are type checked at run-time. In this paper, we focus on the π-calculus equipped with types for the modeling of input-output capabilities of channels. We present our preliminary work towards a gradually typed version of this calculus. We present a type system, a cast insertion procedure that automatically inserts run-time checks, and an operational semantics of a π-calculus that handles casts on channels. Although we do not claim any theoretical results on our formulations, we demonstrate our calculus with an example and discuss our future plans.

1 Introduction: This paper presents preliminary work on integrating dynamically typed features into a typed π-calculus. Pierce and Sangiorgi have defined a typed π-calculus in which channels can be assigned types that express input and output capabilities [16]. Channels can be declared to be input-only, output-only, or usable for both input and output operations. As a consequence, this type system can detect unintended or malicious channel misuses at compile-time. The static typing nature of the type system prevents the modeling of scenarios in which the input-output discipline of a channel is discovered at run-time. In these scenarios, such a discipline must be enforced with run-time type checks in the style of dynamic typing. …
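As a loose, statically typed analogue of the input-only and output-only channel types discussed above (my illustration, not the paper's calculus), the two capabilities can be modeled as separate views over one channel, so code that is handed only a SendPort cannot perform a receive; all class names here are made up for the sketch.

```cpp
#include <memory>
#include <mutex>
#include <optional>
#include <queue>
#include <utility>

// One underlying channel, handed out through capability-restricted views.
template <typename T>
struct Channel {
    std::mutex m;
    std::queue<T> q;
};

template <typename T>
class SendPort {                        // output capability only
public:
    explicit SendPort(std::shared_ptr<Channel<T>> ch) : ch_(std::move(ch)) {}
    void send(T v) {
        std::lock_guard<std::mutex> lk(ch_->m);
        ch_->q.push(std::move(v));
    }
private:
    std::shared_ptr<Channel<T>> ch_;
};

template <typename T>
class ReceivePort {                     // input capability only
public:
    explicit ReceivePort(std::shared_ptr<Channel<T>> ch) : ch_(std::move(ch)) {}
    std::optional<T> try_receive() {
        std::lock_guard<std::mutex> lk(ch_->m);
        if (ch_->q.empty()) return std::nullopt;
        T v = std::move(ch_->q.front());
        ch_->q.pop();
        return v;
    }
private:
    std::shared_ptr<Channel<T>> ch_;
};
```

The gradual-typing side of the paper, where a channel's capability is only discovered and checked at run-time via casts, has no direct counterpart in this fully static sketch.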
Active Object
An Object Behavioral Pattern for Concurrent Programming. R. Greg Lavender (ISODE Consortium Inc., Austin, TX) and Douglas C. Schmidt (Department of Computer Science, Washington University, St. Louis). An earlier version of this paper appeared as a chapter in the book "Pattern Languages of Program Design 2" (ISBN 0-201-89527-7), edited by John Vlissides, Jim Coplien, and Norm Kerth, published by Addison-Wesley, 1996.

Abstract: This paper describes the Active Object pattern, which decouples method execution from method invocation in order to simplify synchronized access to a shared resource by methods invoked in different threads of control. The Active Object pattern allows one or more independent threads of execution to interleave their access to data modeled as a single object. A broad class of producer/consumer and reader/writer problems are well-suited to this model of concurrency. This pattern is commonly used in distributed systems requiring multi-threaded servers. In addition, client applications (such as windowing systems and network browsers) are increasingly employing active objects to simplify concurrent, asynchronous network operations.

1 Intent: The Active Object pattern decouples method execution from method invocation in order to simplify synchronized access to a shared resource by methods invoked in different threads of control.

… [2]. Sources and destinations communicate with the Gateway …

Figure 1: Connection-Oriented Gateway.
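A much-reduced sketch of the pattern's core idea (omitting the proxy, scheduler, servant and future participants described in the paper): callers enqueue method requests and return immediately, while a single worker thread executes them, so invocation and execution are decoupled and the object's state is only ever touched from one thread.

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// Minimal active object: increment() enqueues a request (invocation);
// the worker thread drains the activation queue (execution).
class ActiveCounter {
public:
    ActiveCounter() : worker_([this] { run(); }) {}
    ~ActiveCounter() {
        post([this] { done_ = true; });   // shutdown request runs on the worker
        worker_.join();
    }

    // Method *invocation*: returns immediately after enqueueing a request.
    void increment() { post([this] { ++value_; }); }

private:
    void post(std::function<void()> request) {
        { std::lock_guard<std::mutex> lk(m_); queue_.push(std::move(request)); }
        cv_.notify_one();
    }

    // Method *execution*: the worker thread processes one request at a time.
    void run() {
        while (!done_) {
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [this] { return !queue_.empty(); });
            auto request = std::move(queue_.front());
            queue_.pop();
            lk.unlock();
            request();
        }
    }

    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> queue_;
    bool done_ = false;
    long value_ = 0;
    std::thread worker_;   // declared last so the queue exists before run() starts
};
```

A full implementation along the lines of the paper would also hand back futures for results and allow guards on the activation queue.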
Mastering Concurrent Computing Through Sequential Thinking
Review article. DOI: 10.1145/3363823. By Sergio Rajsbaum and Michel Raynal. A 50-year history of concurrency.

…we do not have good tools to build efficient, scalable, and reliable concurrent systems. Concurrency was once a specialized discipline for experts, but today the challenge is for the entire information technology community because of two disruptive phenomena: the development of networking communications, and the end of the ability to increase processors speed at an exponential rate. Increases in performance come through concurrency, as in multicore architectures. Concurrency is also critical to achieve fault-tolerant, distributed services, as in global databases, cloud computing, and blockchain applications.

Concurrent computing through sequential thinking. Right from the start in the 1960s, the main way of dealing with concurrency has been by reduction to sequential reasoning. Transforming problems in the concurrent domain into simpler problems in the sequential domain yields benefits for specifying, implementing, and verifying concurrent programs. It is a two-sided strategy, together with a bridge connecting the two sides. First, a sequential specification of an object (or service) that can be ac…

Key insights:
• A main way of dealing with the enormous challenges of building concurrent systems is by reduction to sequential thinking. Over more than 50 years, more sophisticated techniques have been developed to build complex systems in this way.
• The strategy starts by designing …

"I must appeal to the patience of the wondering readers, suffering as I am from the sequential nature of human communication." —E.W. …
Enhanced Debugging for Data Races in Parallel Programs Using OpenMP
A thesis presented to the faculty of the Department of Computer Science, University of Houston, in partial fulfillment of the requirements for the degree Master of Science, by Marcus W. Hervey, December 2016. Approved by Dr. Edgar Gabriel (chairman), Dr. Shishir Shah, and Dr. Barbara Chapman (Dept. of Computer Science, Stony Brook University).

Acknowledgements: A special thank you to Dr. Barbara Chapman, Dr. Edgar Gabriel, Dr. Shishir Shah, Dr. Lei Huang, Dr. Chunhua Liao, Dr. Laksano Adhianto, and Dr. Oscar Hernandez for their guidance and support throughout this endeavor. I would also like to thank Van Bui, Deepak Eachempati, James LaGrone, and Cody Addison for their friendship and teamwork, without which this endeavor would have never been accomplished. I dedicate this thesis to my parents (Billy and Olivia Hervey), who have always challenged me to be my best, and to my wife and kids for their love and sacrifice throughout this process. "Our greatest weakness lies in giving up. The most certain way to succeed is always to try just one more time." – Thomas Alva Edison …
A Separation Logic for Refining Concurrent Objects
Aaron Turon and Mitchell Wand, Northeastern University.

Abstract: Fine-grained concurrent data structures are crucial for gaining performance from multiprocessing, but their design is a subtle art. Recent literature has made large strides in verifying these data structures, using either atomicity refinement or separation logic with rely-guarantee reasoning. In this paper we show how the ownership discipline of separation logic can be used to enable atomicity refinement, and we develop a new rely-guarantee method that is localized to the definition of a data structure. We present the first semantics of separation logic that is sensitive to atomicity, and show how to control this sensitivity through ownership. The result is a logic that enables compositional reasoning about atomicity and interference, even for programs that use fine-grained synchronization and dynamic memory allocation.

The fine-grained counter increment

    do { tmp = *C; } until CAS(C, tmp, tmp+1);

implements inc using an optimistic approach: it takes a snapshot of the counter without acquiring a lock, computes the new value of the counter, and uses compare-and-set (CAS) to safely install the new value. The key is that CAS compares *C with the value of tmp, atomically updating *C with tmp + 1 and returning true if they are the same, and just returning false otherwise. Even for this simple data structure, the fine-grained implementation significantly outperforms the lock-based implementation [17]. Likewise, even for this simple example, we would prefer to think of the counter in a more abstract way when reasoning about its clients, giving it the following specification: …
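A C++ rendering of the two implementations the excerpt contrasts (my sketch, not the paper's formal development): a coarse-grained lock-based counter and the optimistic CAS-based counter from the pseudocode above, both providing the same abstract inc() operation.

```cpp
#include <atomic>
#include <mutex>

// Coarse-grained: the whole read-modify-write is one critical region.
class LockCounter {
public:
    int inc() {
        std::lock_guard<std::mutex> lock(m_);
        return ++value_;
    }
private:
    std::mutex m_;
    int value_ = 0;
};

// Fine-grained: snapshot the counter, then install tmp + 1 with CAS,
// retrying on interference, as in `do { tmp = *C; } until CAS(C, tmp, tmp+1);`
class CasCounter {
public:
    int inc() {
        int tmp = value_.load();
        while (!value_.compare_exchange_weak(tmp, tmp + 1)) {
            // On failure, tmp is refreshed with the current value; retry.
        }
        return tmp + 1;
    }
private:
    std::atomic<int> value_{0};
};
```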
Reconciling Real and Stochastic Time: the Need for Probabilistic Refinement
Formal Aspects of Computing (2012) 24: 497–518. DOI 10.1007/s00165-012-0230-y. © The Author(s) 2012; published with open access at Springerlink.com. J. Markovski, P. R. D'Argenio, J. C. M. Baeten, and E. P. de Vink (Eindhoven University of Technology, Eindhoven, The Netherlands; FaMAF, Universidad Nacional de Córdoba, Córdoba, Argentina; Centrum Wiskunde & Informatica, Amsterdam, The Netherlands).

Abstract. We conservatively extend an ACP-style discrete-time process theory with discrete stochastic delays. The semantics of the timed delays relies on time additivity and time determinism, which are properties that enable us to merge subsequent timed delays and to impose their synchronous expiration. Stochastic delays, however, interact with respect to a so-called race condition that determines the set of delays that expire first, which is guided by an (implicit) probabilistic choice. The race condition precludes the property of time additivity as the merger of stochastic delays alters this probabilistic behavior. To this end, we resolve the race condition using conditionally-distributed unit delays. We give a sound and ground-complete axiomatization of the process theory comprising the standard set of ACP-style operators. In this generalized setting, the alternative composition is no longer associative, so we have to resort to special normal forms that explicitly resolve the underlying race condition. Our treatment succeeds in the initial challenge to conservatively extend standard time with stochastic time. However, the 'dissection' of the stochastic delays to conditionally-distributed unit delays comes at a price, as we can no longer relate the resolved race condition to the original stochastic delays. …
Lock-Free Programming
Geoff Langdale.

Desynchronization: this is an interesting topic, and it will (may?) become even more relevant with near-ubiquitous multi-processing. Still: please don't rewrite any Project 3s!

Synchronization (course announcements): we received notification via the web form that one group has passed the P3/P4 test suite. Congratulations! We will be releasing a version of the fork-wait bomb which doesn't make as many assumptions about task id's; please look for it today and let us know right away if it causes any trouble for you. Personal and group disk quotas have been grown in order to reduce the number of people running out over the weekend; if you try hard enough you'll still be able to do it.

Outline:
• Problems with locking
• Definition of lock-free programming
• Examples of lock-free programming
• Linux OS uses of lock-free data structures
• Miscellanea (higher-level constructs, 'wait-freedom')
• Conclusion

Problems with locking (this list is more or less contentious, and not equally relevant to all locking situations): deadlock, priority inversion, convoying, "async-signal-safety", kill-tolerant availability, pre-emption tolerance, and overall performance.

Problems with locking 2:
• Deadlock – processes that cannot proceed because they are waiting for resources that are held by processes that are waiting for…
• Priority inversion – low-priority processes hold a lock required by a higher-priority process; priority inheritance is a possible solution …
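A minimal sketch of the first problem in the list (my illustration, not from the slides): two threads that take the same pair of locks in opposite orders can deadlock, and the usual cures are a global lock order or an all-or-nothing acquisition such as std::scoped_lock.

```cpp
#include <mutex>
#include <thread>

std::mutex a, b;

// Each thread holds one mutex and waits for the other; if they interleave
// badly, neither can ever proceed: a deadlock.
void worker1() {
    std::lock_guard<std::mutex> la(a);
    std::lock_guard<std::mutex> lb(b);   // blocks forever if worker2 holds b
}

void worker2() {
    std::lock_guard<std::mutex> lb(b);
    std::lock_guard<std::mutex> la(a);   // blocks forever if worker1 holds a
}

// One fix: acquire both locks together with a deadlock-avoiding algorithm
// (or, equivalently, impose a single global lock order everywhere).
void worker_fixed() {
    std::scoped_lock both(a, b);
}

int main() {
    // Running worker1 and worker2 concurrently *may* hang; worker_fixed cannot.
    std::thread t1(worker_fixed), t2(worker_fixed);
    t1.join();
    t2.join();
}
```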