Blocking and Non-Blocking Process Synchronization: Analysis of Implementation
Vladislav Nazaruk, Riga Technical University; Pavel Rusakov, Riga Technical University

Abstract — In computer programs with multiple processes, inter-process communication is of high importance. One of its main aspects is process synchronization, which can be divided into two classes: blocking and non-blocking. Blocking synchronization is simpler and more widely used; however, non-blocking synchronization can avoid some negative effects. This paper discusses the logic of widespread process synchronization mechanisms and analyzes the support of these mechanisms on different platforms.

Keywords — concurrent computing, process synchronization, blocking and non-blocking algorithms

I. INTRODUCTION

Nowadays the use of parallel and concurrent computing is rapidly increasing. A stimulating factor is the development and spread of computers with multi-core processors (as well as multi-processor systems). In such computers, several processes (threads) can be executed at any moment. Parallel computing implies that one computing task (or a part of it) is divided into several equal subtasks, which are executed simultaneously by different processors or processor cores. The goal of parallel computing is therefore mainly to decrease computation time. Concurrent computing also implies a separation of a computing task into several threads, but with the aim of structuring a program in a way appropriate for the task, so that processor time can be dynamically reallocated to other threads when one thread is idle.

Parallel and concurrent computing are closely tied together, and both can be used to speed up a computation. Both parallel and concurrent computing need inter-process communication (IPC) — a way for different threads of one process to exchange data with one another. There exist different techniques for inter-process communication, for example message passing, synchronization, shared memory, and remote procedure calls (RPC). Synchronization techniques by themselves form a large class of mechanisms; at a first approximation they can be divided into two classes: data synchronization mechanisms and process synchronization mechanisms. The aim of data synchronization is to ensure that all copies of specific data are coherent (up to date); the aim of process synchronization is to ensure a specific coherence of the execution of actions between several processes (or threads).

This paper discusses algorithms used for synchronizing processes. The goal of the paper is to analyze process synchronization mechanisms in terms of their operation logic and their support on modern software and hardware platforms. The tasks of the research are the following:
— to analyze the operation logic of widespread process synchronization mechanisms,
— to analyze the support of widely used process synchronization mechanisms in different modern object-oriented programming languages and software and hardware platforms,
— to compare blocking and non-blocking process synchronization with each other.

II. RELATED WORKS

Process synchronization is common to all operating systems. Therefore, many books dedicated to operating systems, for example [1], describe the basic principles of process synchronization and the synchronization mechanisms commonly used in operating systems. These works describe mostly blocking synchronization mechanisms. In [2], another class of process synchronization mechanisms — non-blocking synchronization — is described and studied.

Before trying to implement a specific synchronization mechanism on a new software or hardware platform, the platform should be examined to discover whether it supports, in principle, all instructions or primitives needed to implement the mechanism. All commonly used synchronization mechanisms can nowadays be implemented on a hardware platform that includes any modern central processing unit (CPU). However, when a graphics processing unit (GPU) serves as such a hardware platform, the possibility of implementing synchronization mechanisms must still be considered, since GPUs are a relatively new platform. The paper [3] shows that the most demanding class of synchronization mechanisms (wait-free non-blocking synchronization) can be implemented on GPUs even without strong synchronization primitives in hardware. In contrast, the corresponding chapter of this paper discusses the possible support of such hardware primitives by GPUs.

III. BLOCKING PROCESS SYNCHRONIZATION

There exist many different process synchronization mechanisms. Two base classes of these mechanisms are blocking and non-blocking synchronization mechanisms. (The differences between these two classes will be described further in this paper.) The most commonly used are blocking synchronization mechanisms, or locks; the most widespread blocking synchronization mechanisms are the following: semaphores, monitors, condition variables, events, barriers, rendezvous, etc. These synchronization mechanisms are considered next in this chapter.

A. Overview of blocking synchronization

The aim of locks is to control access to a specific shared resource (for example, a shared memory area) in such an abstract way that when a process holds a lock, it can operate on the shared resource only to the degree allowed by the lock. For example, when a process holds a lock which grants full access to a shared resource, the process has the right to do anything with the resource. On the other side, if a process holds a lock which grants read-only access to a shared resource, the process has the right only to read from the resource. A process that does not hold a lock is forbidden to perform any operations on the corresponding shared resource.

Sharing a resource (and thus synchronizing process access to it) with locks can cause several undesirable situations, such as [4]:
1) Insufficient throughput in the case of a heavily contended lock — when processes frequently try to acquire a lock while it is held by some other process. In this case much processor time is spent in vain.
2) If a process holding a lock is paused, no other process can take this lock, and therefore all other processes requiring the lock cannot make any progress. A situation when a higher-priority process waits for a long period of time for a lock held by a lower-priority process is called a priority inversion.
3) When there exists a set of processes in which each process is waiting for a lock held by another process in this set, a deadlock occurs. In this situation the execution of all processes from the set is indefinitely postponed — without special actions, no process of the set will ever make progress.

B. Classification of locks

Locks can be classified in several ways, including the following:
1) by the possible mode of access to a shared resource:
a) semaphores (binary semaphores, mutexes, critical sections and counting semaphores) — it is supposed that all threads have exclusive (read/write) access to a resource [1],
b) readers-writers locks — threads are differentiated into readers and writers;
2) by the behaviour of a process when the resource is busy:
a) sleep locks (most common) — a process is blocked when it attempts to access a busy resource,
b) spin locks — after an attempt to access a busy resource, a process is notified that the resource is busy and is not blocked; then it usually attempts to access the resource over again [1].

The aim of monitors [1] is similar to the aim of locks; however, monitors are on a more abstract level than locks — a monitor is an object which by itself guarantees that its methods are always executed with mutual exclusion (i.e., at any given time only one method of this object can be executing). Although binary semaphores are usually used for the internal implementation of monitors [1], when using monitors the programmer does not need to understand their inner working.

Monitors can be extended in order to allow threads to access a shared resource only when a specific condition (provided by a corresponding thread) is met — in this case, they are called condition variables.

The aim of events is to suspend (block) all threads which explicitly asked for that, until a specific condition is met (this happens when another thread notifies the corresponding event object).

Barriers are synchronization mechanisms for a group of threads, used when it is necessary to block the execution of each thread from the group at a specific location ("barrier") in the executable code of the corresponding thread. As soon as all threads from the group are blocked ("reach the barrier"), the barrier automatically unblocks all of them — allowing them to resume working together at the same time.

Rendezvous are process synchronization mechanisms based on a client/server model of interaction: one thread (a server) declares a set of services (methods with specific entries) offered to other threads (clients). To request a rendezvous, a calling thread must make an entry call on an entry of another thread. The calling thread becomes blocked waiting for the rendezvous to take place — i.e., until the called thread accepts this entry call. When the accepting thread ends the rendezvous, both threads are freed to continue their execution [6].

C. Support of blocking synchronization in imperative programming languages

In Table 1, there are given schematic results of comparison of some widespread
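As an illustration of the semaphore mechanism discussed above, a counting semaphore bounds how many threads may use a shared resource at once. The following is a minimal sketch using Python's `threading` module; names such as `worker` and `MAX_HOLDERS` are illustrative, not taken from the paper:

```python
import threading

MAX_HOLDERS = 2                        # at most two threads inside at once
sem = threading.Semaphore(MAX_HOLDERS)
state_lock = threading.Lock()          # protects the two counters below
current, peak = 0, 0

def worker():
    """Enter the semaphore-guarded section and record peak concurrency."""
    global current, peak
    with sem:                          # blocks while MAX_HOLDERS are inside
        with state_lock:
            current += 1
            peak = max(peak, current)
        # ... work with the shared resource would happen here ...
        with state_lock:
            current -= 1

def run_demo(n_threads=8):
    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return peak                        # never exceeds MAX_HOLDERS
```

With `Semaphore(1)` (a binary semaphore) this degenerates into a mutex, matching the classification in subsection B.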
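The difference between sleep locks and spin locks can also be sketched in code. `SpinLock` below is a hypothetical illustration that simulates a hardware test-and-set flag with a non-blocking `acquire`, since Python exposes no CPU atomics directly; a plain `threading.Lock` plays the role of the sleep lock:

```python
import threading

class SpinLock:
    """Spin lock sketch: busy-waits instead of putting the thread to sleep."""
    def __init__(self):
        self._flag = threading.Lock()   # stands in for a test-and-set flag

    def acquire(self):
        # Retry without blocking: the caller learns the resource is busy
        # (acquire returns False) and immediately tries again.
        while not self._flag.acquire(blocking=False):
            pass                        # spinning: processor time spent in vain

    def release(self):
        self._flag.release()

def count_with(lock, n_threads=4, n_iters=2500):
    """Increment a shared counter under the given lock; return the total."""
    total = [0]

    def body():
        for _ in range(n_iters):
            lock.acquire()
            total[0] += 1               # critical section
            lock.release()

    threads = [threading.Thread(target=body) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total[0]
```

`count_with(threading.Lock())` blocks waiting threads in the scheduler (a sleep lock), while `count_with(SpinLock())` keeps them running in a retry loop; both protect the counter correctly.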
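A monitor in the sense described above can be approximated in Python by an object whose every public method runs under one internal lock, and extending it with a `threading.Condition` gives exactly the condition-variable behaviour the paper describes. The `BoundedBuffer` below is a sketch under those assumptions, not code from the paper:

```python
import collections
import threading

class BoundedBuffer:
    """Monitor sketch: all methods execute under one lock (mutual exclusion);
    the condition variable lets threads wait until "not full" / "not empty"."""
    def __init__(self, capacity):
        self._items = collections.deque()
        self._capacity = capacity
        self._cond = threading.Condition()   # owns the monitor's internal lock

    def put(self, item):
        with self._cond:                     # enter the monitor
            while len(self._items) >= self._capacity:
                self._cond.wait()            # blocked until a get() notifies
            self._items.append(item)
            self._cond.notify_all()

    def get(self):
        with self._cond:
            while not self._items:
                self._cond.wait()            # blocked until a put() notifies
            item = self._items.popleft()
            self._cond.notify_all()
            return item

def produce_consume(n=10, capacity=3):
    """One producer, one consumer; returns the items in consumption order."""
    buf, received = BoundedBuffer(capacity), []
    producer = threading.Thread(
        target=lambda: [buf.put(i) for i in range(n)])
    consumer = threading.Thread(
        target=lambda: [received.append(buf.get()) for _ in range(n)])
    producer.start(); consumer.start()
    producer.join(); consumer.join()
    return received
```

The `while` loops around `wait()` re-check the condition after each wake-up, which is the standard discipline for condition variables.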
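Events and barriers as characterized above map directly onto `threading.Event` and `threading.Barrier`. In this sketch (with illustrative names), three workers first block on an event until it is notified, then all reach a barrier and are released together:

```python
import threading

def run_group(n=3):
    go = threading.Event()             # workers block until go.set()
    barrier = threading.Barrier(n)     # releases only when all n have arrived
    log, log_lock = [], threading.Lock()

    def worker(name):
        go.wait()                      # suspended until the event is notified
        barrier.wait()                 # "reach the barrier"
        with log_lock:
            log.append(name)           # all threads resume together from here

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n)]
    for t in threads:
        t.start()
    go.set()                           # notify the event object: unblock all
    for t in threads:
        t.join()
    return sorted(log)
```

`Barrier(n)` unblocks automatically once the n-th thread calls `wait()`, matching the description of barriers in the text.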
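The rendezvous mechanism described above has no direct Python primitive, but its blocking entry-call behaviour can be sketched with queues: the client posts an entry call and blocks on the reply, and the server accepts calls one at a time. The `Squarer` server and its `call_square` entry are hypothetical names introduced only for this illustration:

```python
import queue
import threading

class Squarer(threading.Thread):
    """Server thread offering one entry ("square"): a rendezvous sketch."""
    def __init__(self):
        super().__init__(daemon=True)         # dies with the main program
        self._entry = queue.Queue()           # pending entry calls

    def call_square(self, x):
        """Client side: make the entry call and block until it is served."""
        reply = queue.Queue(maxsize=1)
        self._entry.put((x, reply))
        return reply.get()                    # blocked until the rendezvous ends

    def run(self):
        while True:
            x, reply = self._entry.get()      # accept the next entry call
            reply.put(x * x)                  # ending the rendezvous frees the caller
```

Usage: `s = Squarer(); s.start(); s.call_square(6)` blocks the caller until the server thread accepts and answers the call, mirroring the Ada-style entry-call semantics cited from [6].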