4. Process Synchronization

Lecture Notes for CS347: Operating Systems
Mythili Vutukuru, Department of Computer Science and Engineering, IIT Bombay

4.1 Race conditions and Locks

• Multiprogramming and concurrency bring in the problem of race conditions, where multiple processes executing concurrently on shared data may leave that data in an undesirable, inconsistent state. Note that race conditions can happen even on a single-processor system, if processes are context switched out by the scheduler, or are otherwise interrupted, while updating shared data structures.

• Consider a simple example of two threads of a process incrementing a shared variable. If the increments happen in parallel, the threads may overwrite each other's result, and the counter will not be incremented twice as expected. That is, a line of code that increments a variable is not atomic: it compiles down to separate load, add, and store instructions, which may interleave when executed concurrently by different threads.

• Pieces of code that must be executed in a mutually exclusive, atomic manner by contending threads are referred to as critical sections. Critical sections must be protected with locks to guarantee the property of mutual exclusion. The code to update a shared counter is a simple example of a critical section; code that adds a new node to a linked list is another. A critical section performs an operation on a shared data structure that may temporarily leave the data structure in an inconsistent state in the middle of the operation. Therefore, to maintain consistency and preserve the invariants of shared data structures, critical sections must always execute in a mutually exclusive fashion. Locks guarantee that only one contending thread resides in the critical section at any time, ensuring that the shared data is never left in an inconsistent state.

• There have been several attempts to develop software-based locks, but they are cumbersome and do not work well on multiprocessor systems. Note that simply having a lock variable and manipulating it with ordinary code will not work, because the race condition now occurs when updating the lock itself. A better alternative is to use atomic instructions provided by most CPUs. One such instruction is the xchg instruction (the name may differ across architectures): xchg atomically writes a new value into a memory location and returns the old value. For example, xchg(&lock, 1) atomically sets the variable lock to 1 and returns the old value of lock. Such atomic instructions can be used to implement lock and unlock functions. For example, the code while(xchg(&lock, 1) != 0); acquires a lock: the while loop keeps running as long as someone else holds the lock, because xchg keeps returning 1. Once the lock is released, xchg acquires it atomically (setting it back to 1 while returning 0) and the while loop terminates. Similarly, a lock can be released with xchg(&lock, 0).

• Threads that want to access a critical section must try to acquire the lock, and proceed to the critical section only when the lock has been acquired. Now, what should the OS do with a thread when the requested lock is held by someone else? There are two options: the thread could wait busily, constantly polling to check if the lock is available; or the thread could give up its CPU, go to sleep (i.e., block), and be scheduled again when the lock becomes available. The former kind of locking is usually referred to as a spinlock, while the latter is called a regular lock or a mutex. (Note that while the term mutex can refer to a generic lock that spins or sleeps, it usually means a lock that sleeps.) The simple locking code based on the xchg instruction above spins busily in the while loop, and hence is a simple implementation of a spinlock. A spinlock usually consumes a lot more CPU than a mutex, and should be used only when the time saved by avoiding a context switch outweighs the cost of busy waiting.
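To make the xchg-based lock concrete, below is a minimal sketch of a spinlock protecting the shared counter from the example above. It assumes GCC/Clang, whose __sync_lock_test_and_set and __sync_lock_release builtins play the role of the atomic exchange; the structure otherwise follows the while(xchg(...)) idea from the notes.

    #include <pthread.h>
    #include <stdio.h>

    static volatile int lock = 0;   /* 0 = free, 1 = held */
    static long counter = 0;        /* shared data protected by the lock */

    /* Spin until the atomic exchange returns 0, i.e., we flipped the lock from free to held. */
    static void spin_lock(volatile int *l) {
        while (__sync_lock_test_and_set(l, 1) != 0)
            ;  /* busy wait */
    }

    /* Release the lock by atomically writing 0. */
    static void spin_unlock(volatile int *l) {
        __sync_lock_release(l);
    }

    static void *worker(void *arg) {
        (void)arg;                  /* unused */
        for (int i = 0; i < 1000000; i++) {
            spin_lock(&lock);
            counter++;              /* critical section */
            spin_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* With the lock, this always prints 2000000; without it, it is usually smaller. */
        printf("counter = %ld\n", counter);
        return 0;
    }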
• Most operating systems provide one or more mechanisms for locking. The pthreads API has support for locks, both spinlocks and regular mutexes: threads create a shared lock variable, and call the lock/unlock functions of the API around critical sections. The pthreads locks are typically implemented using hardware atomic instructions. Some operating systems also implement hybrids between a spinlock and a mutex: the thread spins busily for the lock for a short period of time, or when it expects the thread holding the lock to release it soon, and blocks if the lock is not released within a reasonable time.
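For instance, the shared counter could equally be protected with a pthreads mutex; the short sketch below shows the usual pattern (error checking omitted). The spinning variant uses the analogous pthread_spin_lock family of functions.

    #include <pthread.h>

    static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;
    static long counter = 0;                  /* shared data protected by counter_lock */

    void increment(void) {
        pthread_mutex_lock(&counter_lock);    /* sleep (block) if another thread holds the lock */
        counter++;                            /* critical section */
        pthread_mutex_unlock(&counter_lock);
    }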
• In addition to providing locking APIs for user code, operating systems also implement mechanisms for safely accessing critical sections in kernel code. That is, when a kernel data structure is being manipulated, the OS must ensure that the update happens in an atomic manner. On single-processor systems, it is enough to disable interrupts when entering a critical section, and re-enable them after the update is done. Why? Because, unlike user threads that may get context switched out by the kernel, nothing other than interrupts can interrupt the execution of the OS. On multiprocessor systems, however, disabling interrupts alone does not suffice, because another process can be in kernel mode on another CPU core and concurrently access kernel data. Note that all processes share the kernel code and context, so multiple processes in kernel mode are in fact executing over the same kernel code and data. To prevent concurrent access from a process in kernel mode on another CPU, the Linux kernel primarily uses spinlocks to protect its data structures from concurrent updates. Spinlocks are held for short periods of time, so that kernel code on other cores can afford to wait busily for the lock to be released.

• Any kernel code that holds a spinlock must not do anything that causes the process to block: the code that runs next may require this lock, which will never be released, leading to a deadlock. Therefore, spinlocks must be used very carefully. Operating systems also disable interrupts (and any kind of preemption or context switch) on the core holding the spinlock (but not on other cores) for as long as the spinlock is held, to prevent interrupt handlers on that core from deadlocking over the held lock. (Note that user processes that hold locks can still be preempted by interrupts, since interrupt handlers execute kernel code and will never deadlock over locks in user code.)

• Threading libraries can provide many other fancier locks, in addition to simple spinlocks and mutexes. Some examples are given below. Note, however, that code using these locks can be reimplemented using the simple lock as a building block, so these variants are more a convenience for programmers than a requirement for correctness.

  – When a thread holds a lock and calls a function that requests the same lock again, a deadlock occurs. One way to solve this problem is to use recursive locks. A recursive lock can be locked multiple times by the same thread, and every lock operation must see a corresponding call to unlock; the lock is fully released only when all lock operations have been matched by unlocks. Another thread still cannot acquire the lock while it is held. Recursive locks are useful to avoid deadlocks when the same thread may acquire a lock multiple times, but they must be used carefully: the re-entrant code runs while the outer critical section is still in progress, and may therefore observe the shared data in an intermediate state. APIs like pthreads provide recursive locks that can be used in user code.

  – Observe that concurrent writes to shared data lead to race conditions, while multiple threads reading shared data is perfectly fine. Read-write locks make this distinction: they provide separate lock functions for reading and writing threads. Multiple threads can be granted the lock simultaneously if all of them request it in read mode. However, a thread that requests the lock in write mode must wait for all other readers and writers to release the lock before it can acquire it. Read-write locks thus make it easy to allow concurrent reads. Again, most APIs that provide locking functions (mutexes or spinlocks) also provide read/write versions of the locks. (A short pthreads sketch of recursive and read-write locks is given at the end of this section.)

• In addition to mutual exclusion, two other properties are desirable when several processes must access a critical section. The first is progress, or freedom from deadlock: if multiple processes want to access a critical section, one of them should be granted access at any point in time. The other is bounded wait, or freedom from starvation: no single process should be made to wait indefinitely for access to the critical section. Note that the properties of mutual exclusion, progress, and bounded wait are all independent. For example, one can have a solution (trivially: always give access to only one designated process) that guarantees mutual exclusion and progress, at the risk of starving the other processes. Locks are generally designed to always guarantee mutual exclusion, but bad implementations may sometimes lead to deadlocks or starvation.

• Below are several guidelines for programmers using locks.

  – Every piece of data shared between processes or threads must be protected by a lock. Note that the kernel only provides mechanisms for locking; it is up to the user to ensure that the lock is correctly acquired before updating shared data. The kernel does not in any way detect or prevent threads from updating shared data without holding a lock, or from holding locks unnecessarily when no shared data is being updated.
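As promised above, here is a minimal pthreads sketch (not from the notes) of the two lock variants described earlier: a recursive mutex and a read-write lock. The names table_value, lookup, and update are purely illustrative.

    #include <pthread.h>

    /* A recursive mutex: the same thread may lock it again without deadlocking. */
    static pthread_mutex_t rec_lock;

    void init_recursive_lock(void) {
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
        pthread_mutex_init(&rec_lock, &attr);
        pthread_mutexattr_destroy(&attr);
    }

    void inner(void) {
        pthread_mutex_lock(&rec_lock);      /* re-acquired by the same thread: no deadlock */
        /* ... work on shared data ... */
        pthread_mutex_unlock(&rec_lock);
    }

    void outer(void) {
        pthread_mutex_lock(&rec_lock);
        inner();                            /* calls back into code that takes the same lock */
        pthread_mutex_unlock(&rec_lock);    /* lock fully released only after both unlocks */
    }

    /* A read-write lock: many readers may hold it at once, a writer holds it exclusively. */
    static pthread_rwlock_t table_lock = PTHREAD_RWLOCK_INITIALIZER;
    static int table_value;                 /* illustrative shared data */

    int lookup(void) {
        pthread_rwlock_rdlock(&table_lock); /* shared (read) mode */
        int v = table_value;
        pthread_rwlock_unlock(&table_lock);
        return v;
    }

    void update(int v) {
        pthread_rwlock_wrlock(&table_lock); /* exclusive (write) mode */
        table_value = v;
        pthread_rwlock_unlock(&table_lock);
    }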
