Concurrency in C∀
Recommended publications
-
Thread Management for High Performance Database Systems - Design and Implementation
Technical Report Nr.: FIN-003-2018. Robert Jendersie, Johannes Wuensche, Johann Wagner, Marten Wallewein-Eising, Marcus Pinnecke, and Gunter Saake. Database and Software Engineering Group, Fakultät für Informatik, Otto-von-Guericke-Universität Magdeburg. Technical report (Internet), Elektronische Zeitschriftenreihe der Fakultät für Informatik der Otto-von-Guericke-Universität Magdeburg, ISSN 1869-5078. Editorial deadline: 21.08.2018. Publisher: Otto-von-Guericke-Universität Magdeburg, Fakultät für Informatik. Responsible for this issue: Marcus Pinnecke, Postfach 4120, 39016 Magdeburg, E-Mail: [email protected], http://www.cs.uni-magdeburg.de/Technical_reports.html
-
Concurrency & Parallel Programming Patterns
Concurrency & Parallel programming patterns. Evgeny Gavrin.

Outline: 1. Concurrency vs Parallelism; 2. Patterns by groups; 3. Detailed overview of parallel patterns; 4. Summary; 5. Proposal for language.

Concurrency vs Parallelism:
● Parallelism is the simultaneous execution of computations: "doing lots of things at once".
● Concurrency is the composition of independently executing processes: "dealing with lots of things at once".

Patterns by groups: architectural patterns. These patterns define the overall architecture for a program:
● Pipe-and-filter: view the program as filters (pipeline stages) connected by pipes (channels). Data flows through the filters, which take input and transform it into output (see the sketch after this list).
● Agent and Repository: a collection of autonomous agents updates state managed on their behalf in a central repository.
● Process control: the program is structured analogously to a process control pipeline, with monitors and actuators moderating feedback loops and a pipeline of processing stages.
● Event-based implicit invocation: the program is a collection of agents that post events they watch for and issue events for other agents. The architecture enforces a high-level abstraction, so invocation of an agent is implicit, i.e. not hardwired to a specific controlling agent.
● Model-view-controller: an architecture with a central model for the state of the program, a controller that manages the state, and one or more agents that export views of the model appropriate to different uses of the model.
● Bulk iterative (AKA bulk synchronous): a program that proceeds iteratively: update state, check against a termination condition, complete coordination, and proceed to the next iteration.
● Map-reduce: the program is represented in terms of two classes of functions.
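The pipe-and-filter pattern above maps naturally onto threads connected by bounded queues. The following minimal Java sketch is our illustration, not from Gavrin's slides; the stage logic and names are invented.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Hypothetical pipe-and-filter sketch: two filter stages connected by a
    // bounded queue (the "pipe"). Values and stage behavior are illustrative.
    public class PipeAndFilter {
        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<Integer> pipe = new ArrayBlockingQueue<>(16);

            // Filter 1: produce raw values into the pipe.
            Thread producer = new Thread(() -> {
                try {
                    for (int i = 1; i <= 5; i++) pipe.put(i);
                    pipe.put(-1); // sentinel: end of stream
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            // Filter 2: transform values taken from the pipe.
            Thread consumer = new Thread(() -> {
                try {
                    for (int v; (v = pipe.take()) != -1; )
                        System.out.println("squared: " + v * v);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            producer.start();
            consumer.start();
            producer.join();
            consumer.join();
        }
    }
-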
Composable Concurrency Models
Composable Concurrency Models. Dan Stelljes, Division of Science and Mathematics, University of Minnesota, Morris; Morris, Minnesota, USA 56267; [email protected]

ABSTRACT. The need to manage concurrent operations in applications has led to the development of a variety of concurrency models. Modern programming languages generally provide several concurrency models to serve different requirements, and programmers benefit from being able to use them in tandem. We discuss challenges surrounding concurrent programming and examine situations in which conflicts between models can occur. Additionally, we describe attempts to identify features of common concurrency models and develop lower-level abstractions capable of supporting a variety of models.

1. INTRODUCTION. Most interactive computer programs depend on concurrency, the ability to perform different tasks at the same time. A web browser, for instance, might at any point be rendering documents in multiple tabs, transferring files, and [...] Given that an application is likely to make use of more than one concurrency model, programmers would prefer that different types of models could safely interact. However, different models do not necessarily work well together, nor are they designed to. Recent work has attempted to identify common "building blocks" that could be used to compose a variety of models, possibly eliminating subtle problems when different models interact and allowing models to be represented at lower levels without resorting to rough approximations [9, 11, 12].

2. BACKGROUND. In a concurrent program, the history of operations may not be the same for every execution. An entirely sequential program could be proved to be correct by showing that its history (that is, the sequence in which its operations are performed) always yields a correct result.
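To make per-execution histories concrete, here is a small Java sketch (ours, not from the paper): two unsynchronized threads interleave a read-modify-write sequence, so different executions have different histories and can print different results.

    // Non-deterministic histories: two threads increment a shared counter
    // without synchronization, so interleavings of the read-modify-write
    // steps can yield different final values across runs.
    public class Histories {
        static int counter = 0; // shared, unsynchronized

        public static void main(String[] args) throws InterruptedException {
            Runnable inc = () -> {
                for (int i = 0; i < 100_000; i++) counter++; // not atomic
            };
            Thread a = new Thread(inc), b = new Thread(inc);
            a.start(); b.start();
            a.join(); b.join();
            // Often prints less than 200000: lost updates from interleaving.
            System.out.println(counter);
        }
    }
-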
Reactive Async: Expressive Deterministic Concurrency
Reactive Async: Expressive Deterministic Concurrency. Philipp Haller, Simon Geries (KTH Royal Institute of Technology, Sweden), Michael Eichberg, Guido Salvaneschi (TU Darmstadt, Germany). [email protected], {eichberg, [email protected]}

Abstract. Concurrent programming is infamous for its difficulty. An important source of difficulty is non-determinism, stemming from unpredictable interleavings of concurrent activities. Futures and promises are widely-used abstractions that help designing deterministic concurrent programs, although this property cannot be guaranteed statically in mainstream programming languages. Deterministic-by-construction concurrent programming models avoid this issue, but they typically restrict expressiveness in important ways. This paper introduces a concurrent programming model, Reactive Async, which decouples concurrent computations using [...]

[...] can generate non-deterministic results because of their unpredictable scheduling. Traditionally, developers address these problems by protecting state from concurrent access via synchronisation. Yet, concurrent programming remains an art: insufficient synchronisation leads to unsound programs but synchronising too much does not exploit hardware capabilities effectively for parallel execution. Over the years researchers have proposed concurrency models that attempt to overcome these issues. For example the actor model [13] encapsulates state into actors which communicate via asynchronous messages, a solution that avoids shared state and hence (low-level) race conditions [...]
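Reactive Async itself is a Scala model; as a rough Java analogue of the futures and promises the abstract mentions, the sketch below (ours, invented names) expresses the result as a data dependency between futures rather than via locks, so a pure computation yields the same answer on every run.

    import java.util.concurrent.CompletableFuture;

    // Illustrative only, not the Reactive Async API: two computations run
    // concurrently, and the combination step is expressed as a data
    // dependency, so no explicit locking is needed and the result is
    // deterministic for pure functions.
    public class FutureComposition {
        public static void main(String[] args) {
            CompletableFuture<Integer> price = CompletableFuture.supplyAsync(() -> 40);
            CompletableFuture<Integer> tax   = CompletableFuture.supplyAsync(() -> 2);

            int total = price.thenCombine(tax, Integer::sum).join();
            System.out.println("total = " + total); // always 42
        }
    }
-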
A Reduction Semantics for Direct-Style Asynchronous Observables
A reduction semantics for direct-style asynchronous observables. Philipp Haller (KTH Royal Institute of Technology, Sweden) and Heather Miller (Carnegie Mellon University, USA). Journal of Logical and Algebraic Methods in Programming 105 (2019) 75–111. Received 31 January 2016; received in revised form 6 March 2019; accepted 6 March 2019; available online 18 March 2019.

Abstract. Asynchronous programming has gained in importance, not only due to hardware developments like multi-core processors, but also due to pervasive asynchronicity in client-side Web programming and large-scale Web applications. However, asynchronous programming is challenging. For example, control-flow management and error handling are much more complex in an asynchronous than a synchronous context. Programming with asynchronous event streams is especially difficult: expressing asynchronous stream producers and consumers requires explicit state machines in continuation-passing style when using widely-used languages like Java. In order to address this challenge, recent language designs like Google's Dart introduce asynchronous generators which allow expressing complex asynchronous programs in a familiar blocking style while using efficient non-blocking concurrency control under the hood. However, several issues remain unresolved, including the integration of analogous constructs into statically-typed languages, and the formalization and proof of important correctness properties. This paper presents a design for asynchronous stream generators for Scala, thereby extending previous facilities for asynchronous programming in Scala from tasks/futures to asynchronous streams.
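The paper's design targets Scala async generators; purely for comparison, the Java sketch below (ours) shows the kind of asynchronous event stream at issue, using the standard java.util.concurrent.Flow API: the subscriber requests elements one at a time and reacts as they arrive, with non-blocking delivery under the hood.

    import java.util.List;
    import java.util.concurrent.Flow;
    import java.util.concurrent.SubmissionPublisher;

    // A rough analogue of an asynchronous event stream: a publisher pushes
    // items to a subscriber, which pulls via request(n).
    public class AsyncStream {
        public static void main(String[] args) throws InterruptedException {
            try (SubmissionPublisher<Integer> pub = new SubmissionPublisher<>()) {
                pub.subscribe(new Flow.Subscriber<Integer>() {
                    private Flow.Subscription sub;
                    public void onSubscribe(Flow.Subscription s) { sub = s; s.request(1); }
                    public void onNext(Integer item) {
                        System.out.println("got " + item);
                        sub.request(1); // ask for the next element
                    }
                    public void onError(Throwable t) { t.printStackTrace(); }
                    public void onComplete() { System.out.println("done"); }
                });
                List.of(1, 2, 3).forEach(pub::submit);
            } // close() eventually triggers onComplete
            Thread.sleep(500); // crude: wait for asynchronous delivery before exiting
        }
    }
-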
Model Checking Multithreaded Programs with Asynchronous Atomic Methods
Model Checking Multithreaded Programs with Asynchronous Atomic Methods. Koushik Sen and Mahesh Viswanathan, Department of Computer Science, University of Illinois at Urbana-Champaign. {ksen,vmahesh}@uiuc.edu

Abstract. In order to make multithreaded programming manageable, programmers often follow a design principle where they break the problem into tasks which are then solved asynchronously and concurrently on different threads. This paper investigates the problem of model checking programs that follow this idiom. We present a programming language SPL that encapsulates this design pattern. SPL extends a simplified form of sequential Java, to which we add the capability of making asynchronous method invocations in addition to the standard synchronous method calls, and the ability to execute asynchronous methods in threads atomically and concurrently. Our main result shows that the control state reachability problem for finite SPL programs is decidable. Therefore, such multithreaded programs can be model checked using the counter-example guided abstraction-refinement framework.

1 Introduction. Multithreaded programming is often used in software as it leads to reduced latency, improved response times of interactive applications, and more optimal use of processing power. Multithreaded programming also allows an application to progress even if one thread is blocked for an I/O operation. However, writing correct programs that use multiple threads is notoriously difficult, especially in the presence of a shared mutable memory. Since threads can interleave, there can be unintended interference through concurrent access of shared data and result in software errors due to data race and atomicity violations.
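SPL is a model language, not a shipping library; purely as an illustration of the asynchronous-atomic-methods idiom it formalizes, the Java sketch below (invented names) posts each method body to a per-object single-threaded executor. Calls return to the caller immediately, while the bodies run atomically, one at a time.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Tasks posted to a single-threaded executor run to completion before
    // the next one starts, so method bodies execute atomically with respect
    // to each other while callers continue asynchronously.
    public class AsyncAtomic {
        private final ExecutorService worker = Executors.newSingleThreadExecutor();
        private int balance = 0;

        // Asynchronous method invocation: returns immediately, body runs atomically.
        public void deposit(int amount) {
            worker.execute(() -> balance += amount); // no lock needed: one thread
        }

        public void shutdown() { worker.shutdown(); }

        public static void main(String[] args) {
            AsyncAtomic acct = new AsyncAtomic();
            for (int i = 0; i < 10; i++) acct.deposit(10);
            acct.shutdown();
        }
    }
-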
Lecture 26: Creational Patterns
Creational Patterns. CSCI 4448/5448: Object-Oriented Analysis & Design, Lecture 26, 11/29/2012. © Kenneth M. Anderson, 2012.

Goals of the Lecture:
• Cover material from Chapters 20-22 of the textbook
• Lessons from design patterns: factories
• Singleton pattern
• Object Pool pattern
• Also discussed: Builder pattern and lazy instantiation

Pattern Classification:
• The Gang of Four classified patterns in three ways
• The behavioral patterns are used to manage variation in behaviors (think Strategy pattern)
• The structural patterns are useful to integrate existing code into new object-oriented designs (think Bridge)
• The creational patterns are used to create objects: Abstract Factory, Builder, Factory Method, Prototype, and Singleton

Factories and Their Role in OO Design:
• It is important to manage the creation of objects: code that mixes object creation with the use of objects can quickly become non-cohesive
• A system may have to deal with a variety of different contexts, each context requiring a different set of objects; in design patterns, the context determines which concrete implementations need to be present
• The code to determine the current context, and thus which objects to instantiate, can become complex, with many different conditional statements
• If you mix this type of code with the use of the instantiated objects, your code becomes cluttered: the use scenarios often take only a few lines of code, but when combined with creational code, the operational code gets buried behind the creational code

Factories Provide Cohesion:
• The use of factories addresses these issues: the conditional code can be hidden within them
• Pass in the parameters associated with the current context and get back the objects you need for the situation, then use those objects to get your work done
• Factories concern themselves just with creation, letting your code focus on other things
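As a concrete companion to the factory discussion, here is a minimal Java sketch (our invented Parser example, not from the lecture): the context-dependent conditional creation code is hidden inside the factory, keeping client code cohesive.

    // Minimal factory sketch: the client asks for a Parser appropriate to
    // the current context without knowing which concrete class it gets.
    interface Parser { void parse(String input); }

    class JsonParser implements Parser {
        public void parse(String input) { System.out.println("JSON: " + input); }
    }

    class XmlParser implements Parser {
        public void parse(String input) { System.out.println("XML: " + input); }
    }

    class ParserFactory {
        // Context (here a file extension) decides the concrete implementation.
        static Parser forFile(String name) {
            if (name.endsWith(".json")) return new JsonParser();
            if (name.endsWith(".xml"))  return new XmlParser();
            throw new IllegalArgumentException("unsupported: " + name);
        }
    }

    public class FactoryDemo {
        public static void main(String[] args) {
            ParserFactory.forFile("config.json").parse("{\"k\": 1}");
        }
    }
-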
An Enhanced Thread Synchronization Mechanism for Java
SOFTWARE: PRACTICE AND EXPERIENCE, Softw. Pract. Exper. 2001; 31:667–695 (DOI: 10.1002/spe.383). An enhanced thread synchronization mechanism for Java. Hsin-Ta Chiao and Shyan-Ming Yuan, Department of Computer and Information Science, National Chiao Tung University, 1001 Ta Hsueh Road, Hsinchu 300, Taiwan.

SUMMARY. The thread synchronization mechanism of Java is derived from Hoare's monitor concept. In the authors' view, however, it is over-simplified and suffers the following four drawbacks. First, it belongs to a category of no-priority monitor, the design of which, as reported in the literature on concurrent programming, is not well rated. Second, it offers only one condition queue; where more than one long-term synchronization event is required, this restriction both degrades performance and further complicates the ordering problems that a no-priority monitor presents. Third, it lacks the support for building more elaborate scheduling programs. Fourth, during nested monitor invocations, deadlock may occur. In this paper, we first analyze these drawbacks in depth before proceeding to present our own proposal, which is a new monitor-based thread synchronization mechanism that we term EMonitor. This mechanism is implemented solely by Java, thus avoiding the need for any modification to the underlying Java Virtual Machine. A preprocessor is employed to translate the EMonitor syntax into the pure Java codes that invoke the EMonitor class libraries. We conclude with a comparison of the performance of the two monitors and allow the experimental results to demonstrate that, in most cases, replacing the Java version with the EMonitor version for developing concurrent Java objects is perfectly feasible.
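The EMonitor code itself is not reproduced here; the sketch below instead shows how the single-condition-queue drawback is addressed by java.util.concurrent, added to Java after this paper was written: one ReentrantLock can carry several Condition queues, one per long-term synchronization event.

    import java.util.concurrent.locks.Condition;
    import java.util.concurrent.locks.ReentrantLock;

    // A bounded buffer with two condition queues on one lock, so producers
    // and consumers wait on separate events instead of one shared queue.
    public class BoundedBuffer<T> {
        private final ReentrantLock lock = new ReentrantLock();
        private final Condition notFull  = lock.newCondition(); // producers wait here
        private final Condition notEmpty = lock.newCondition(); // consumers wait here
        private final Object[] items = new Object[16];
        private int count, putIdx, takeIdx;

        public void put(T x) throws InterruptedException {
            lock.lock();
            try {
                while (count == items.length) notFull.await();
                items[putIdx] = x;
                putIdx = (putIdx + 1) % items.length;
                count++;
                notEmpty.signal(); // wake only a waiter that can proceed
            } finally { lock.unlock(); }
        }

        @SuppressWarnings("unchecked")
        public T take() throws InterruptedException {
            lock.lock();
            try {
                while (count == 0) notEmpty.await();
                T x = (T) items[takeIdx];
                takeIdx = (takeIdx + 1) % items.length;
                count--;
                notFull.signal();
                return x;
            } finally { lock.unlock(); }
        }
    }
-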
Designing for Performance: Concurrency and Parallelism COS 518: Computer Systems Fall 2015
Designing for Performance: Concurrency and Parallelism. COS 518: Computer Systems, Fall 2015. Logan Stafman. Adapted from slides by Mike Freedman.

Definitions:
• Concurrency: execution of two or more tasks overlaps in time.
• Parallelism: execution of two or more tasks occurs simultaneously.

Concurrency without parallelism?
• Parts of tasks interact with another subsystem: network I/O, disk I/O, GPU, ...
• The other task can be scheduled while the first waits on the subsystem's response. (Figure source: bjoor.me)

Scheduling for fairness:
• On a time-sharing system we also want to schedule between tasks, even if none is blocking; otherwise, certain tasks can keep processing, which leads to starvation of other tasks.
• Preemptive scheduling: interrupt the processing of one task to process another task (why with tasks and not network packets?). (Figure source: embeddedlinux.org.cn)
• Many scheduling disciplines: FIFO, Shortest Remaining Time, Strict Priority, Round-Robin.

Concurrency with parallelism:
• Execute code concurrently across CPUs: clusters, cores.
• CPU parallelism differs from distributed systems in the ready availability of shared memory. Yet, to avoid the difference between parallelism between local and remote cores, many apps just use message passing for both (like HPC's use of MPI).
• Related architectures: symmetric multiprocessors (SMPs) and non-uniform memory architectures (NUMA).

Pros/cons of NUMA:
• Pros: applications split between different processors can share memory close to the hardware; reduced bus bandwidth usage.
• Cons: must ensure applications sharing memory are run on processors sharing memory.

Forms of task parallelism:
• Processes: isolated process address space; higher overhead when switching between processes.
• Threads: concurrency within a process; shared address space. Three forms: kernel threads (1:1), which have kernel support and can leverage hardware parallelism; user threads (N:1), a thread library in the system runtime, with the fastest context switching but no benefit from multi-threaded/multi-processor hardware; hybrid (M:N), which schedules M user threads on N kernel threads.
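As a small library-level illustration of the M:N idea (ours, not from the lecture), the Java sketch below multiplexes M tasks onto N pool threads: all 32 tasks run concurrently, and up to N of them in parallel.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // M tasks multiplexed onto N platform threads by a fixed-size pool.
    public class TaskParallelism {
        public static void main(String[] args) {
            int n = Runtime.getRuntime().availableProcessors(); // N workers
            ExecutorService pool = Executors.newFixedThreadPool(n);
            for (int m = 0; m < 32; m++) { // M = 32 tasks
                final int id = m;
                pool.execute(() ->
                    System.out.println("task " + id + " on " +
                            Thread.currentThread().getName()));
            }
            pool.shutdown();
        }
    }
-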
Abstract Interface Behavior of an Object-Oriented Language with Futures and Promises
Abstract interface behavior of an object-oriented language with futures and promises. 3 September 2007. Erika Ábrahám (Albert-Ludwigs-University Freiburg, Germany), Immo Grabe (Christian-Albrechts-University Kiel, Germany), Andreas Grüner (Christian-Albrechts-University Kiel, Germany), and Martin Steffen (University of Oslo, Norway).

1 Motivation. How to marry concurrency and object-orientation has been a long-standing issue; see e.g. [2] for an early discussion of different design choices. Recently, the thread-based model of concurrency, prominently represented by languages like Java and C#, has been criticized, especially in the context of component-based software development. As the word indicates, components are (software) artifacts intended for composition, i.e., open systems interacting with a surrounding environment. To compare different concurrency models on a solid mathematical basis, a semantical description of the interface behavior is needed, and this is what we do in this work. We present an open semantics for a core of the Creol language [4,7], an object-oriented, concurrent language, featuring in particular asynchronous method calls and (since recently [5]) future-based concurrency.

Futures and promises. A future, very generally, represents a result yet to be computed. It acts as a proxy for, or reference to, the delayed result from a given sequential piece of code (e.g., a method or a function body in an object-oriented, resp. a functional, setting). As the client of the delayed result can proceed with its own execution until it actually needs the result, futures provide a natural, lightweight, and (in a functional setting) transparent mechanism to introduce parallelism into a language.
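Creol is its own language, so the Java sketch below (ours) only mirrors the future/promise distinction described above: a CompletableFuture acts as a future, a proxy for the delayed result, toward its reader, and as a promise, explicitly completed, toward its writer.

    import java.util.concurrent.CompletableFuture;

    // Future for the reader, promise for the writer.
    public class PromiseSketch {
        public static void main(String[] args) {
            CompletableFuture<String> promise = new CompletableFuture<>();

            // Writer side: some other activity fulfills the promise later.
            new Thread(() -> promise.complete("result")).start();

            // Reader side: proceed with other work, block only when needed.
            System.out.println("doing other work...");
            System.out.println("value = " + promise.join());
        }
    }
-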
N4244: Resumable Lambdas
Document Number: N4244. Date: 2014-10-13. Reply-to: Christopher Kohlhoff <[email protected]>.

Resumable Lambdas: A language extension for generators and coroutines

1 Introduction. N3977 Resumable Functions and its successor describe a language extension for resumable functions, or "stackless" coroutines. The earlier revision, N3858, states that the motivation is "[...] the increasing importance of efficiently handling I/O in its various forms and the complexities programmers are faced with using existing language features and existing libraries." In contrast to "stackful" coroutines, a prime use case for stackless coroutines is in server applications that handle many thousands or millions of clients. This is due to stackless coroutines having lower memory requirements than stackful coroutines, as the latter, like true threads, must have a reserved stack space. Memory cost per byte is dropping over time, but with server applications at this scale, per-machine costs, such as hardware, rack space, network connectivity and administration, are significant. For a given request volume, it is always cheaper if the application can be run on fewer machines. Existing best practice for stackless coroutines in C and C++ uses macros and a switch-based technique similar to Duff's Device. Examples include the coroutine class included in Boost.Asio, and Protothreads; Simon Tatham's web page, Coroutines in C, inspired both of these implementations. A typical coroutine using this model looks like this:

    void operator()(error_code ec, size_t n)
    {
      if (!ec) reenter (this)
      {
        for (;;)
        {
          yield socket_->async_read_some(buffer(*data_), *this);
          yield async_write(*socket_, buffer(*data_, n), *this);
        }
      }
    }

The memory overhead of a stackless coroutine can be as low as a single int.
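N4244 is a C++ proposal; purely to illustrate the switch-based state-machine idea behind such stackless coroutines, here is a hand-rolled Java sketch (ours, not the proposed feature). The coroutine's entire saved execution position is a single int field, echoing the "single int" overhead mentioned above.

    // A stackless "coroutine" as an explicit state machine. Each call to
    // resume() continues from the label recorded in `state`, which is the
    // kind of code the reenter/yield macros above expand into in C/C++.
    public class Countdown {
        private int state = 0; // the only saved execution position
        private int n = 3;

        public Integer resume() {
            switch (state) {
                case 0: // first entry
                    if (n > 0) { state = 1; return n; } // "yield n"
                    state = 2;
                    return null;
                case 1: // resumed after the yield
                    n--;
                    if (n > 0) return n; // yield again
                    state = 2;
                    return null; // finished
                default:
                    return null;
            }
        }

        public static void main(String[] args) {
            Countdown c = new Countdown();
            for (Integer v; (v = c.resume()) != null; )
                System.out.println(v); // prints 3, 2, 1
        }
    }
-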
Concepts and Paradigms of Object-Oriented Programming: Expansion of Oct 4 OOPSLA-89 Keynote Talk. Peter Wegner, Brown University
Concepts and Paradigms of Object-Oriented Programming: Expansion of Oct 4 OOPSLA-89 Keynote Talk. Peter Wegner, Brown University. Contents:
1. What Is It? (1.1. Objects; 1.2. Classes; 1.3. Inheritance; 1.4. Object-Oriented Systems)
2. What Are Its Goals? (2.1. Software Components; 2.2. Object-Oriented Libraries; 2.3. Reusability and Capital-intensive Software Technology; 2.4. Object-Oriented Programming in the Very Large)
3. What Are Its Origins? (3.1. Programming Language Evolution; 3.2. Programming Language Paradigms)
4. What Are Its Paradigms? (4.1. The State Partitioning Paradigm; 4.2. State Transition, Communication, and Classification Paradigms; 4.3. Subparadigms of Object-Based Programming)
5. What Are Its Design Alternatives? (5.1. Objects; 5.2. Types; 5.3. Inheritance; 5.4. Strongly Typed Object-Oriented Languages; 5.5. Interesting Language Classes; 5.6. Object-Oriented Concept Hierarchies)
6. What Are Its Models of Concurrency? (6.1. Process Structure; 6.2. Internal Process Concurrency; 6.3. Design Alternatives for Synchronization; 6.4. Asynchronous Messages, Futures, and Promises; 6.5. Inter-Process Communication; 6.6. Abstraction, Distribution, and Synchronization Boundaries; 6.7. Persistence and Transactions)
7. What Are Its Formal Computational Models? (7.1. Automata as Models of Object Behavior; 7.2. Mathematical Models of Types and Classes; 7.3. The Essence of Inheritance; 7.4. Reflection in Object-Oriented Systems)
8. [...]