CS162 Operating Systems and Systems Programming Lecture 8


Locks, Semaphores, Monitors, and Quick Intro to Scheduling
September 23rd, 2015
Prof. John Kubiatowicz
http://cs162.eecs.Berkeley.edu

Review: Synchronization problem with Threads
• One thread per transaction, each running:

    Deposit(acctId, amount) {
        acct = GetAccount(acctId);  /* May use disk I/O */
        acct->balance += amount;
        StoreAccount(acct);         /* Involves disk I/O */
    }

• Unfortunately, shared state can get corrupted:

    Thread 1                        Thread 2
    load r1, acct->balance
                                    load r1, acct->balance
                                    add r1, amount2
                                    store r1, acct->balance
    add r1, amount1
    store r1, acct->balance

• Atomic Operation: an operation that always runs to completion or not at all
  – It is indivisible: it cannot be stopped in the middle, and state cannot be modified by someone else in the middle

9/23/15 Kubiatowicz CS162 ©UCB Fall 2015 Lec 8.2

Review: Too Much Milk Solution #3
• Here is a possible two-note solution:

    Thread A                        Thread B
    leave note A;                   leave note B;
    while (note B) {    \\X         if (noNote A) {    \\Y
        do nothing;                     if (noMilk) {
    }                                       buy milk;
    if (noMilk) {                       }
        buy milk;                   }
    }                               remove note B;
    remove note A;

• Does this work? Yes. Both can guarantee that:
  – It is safe to buy, or
  – Other will buy, ok to quit
• At X:
  – If no note B, safe for A to buy
  – Otherwise, wait to find out what will happen
• At Y:
  – If no note A, safe for B to buy
  – Otherwise, A is either buying or waiting for B to quit

Review: Too Much Milk: Solution #4
• Suppose we have some sort of implementation of a lock (more in a moment).
  – Acquire(&mylock) – wait until lock is free, then grab it
  – Release(&mylock) – unlock, waking up anyone waiting
  – These must be atomic operations – if two threads are waiting for the lock and both see it's free, only one succeeds in grabbing the lock
• Then, our milk problem is easy:

    Acquire(&milklock);
    if (noMilk)
        buy milk;
    Release(&milklock);

• Once again, the section of code between Acquire() and Release() is called a "Critical Section"
• Of course, you can make this even simpler: suppose you are out of ice cream instead of milk
  – Skip the test, since you always need more ice cream.

Goals for Today
• Explore several implementations of locks
• Continue with Synchronization Abstractions
  – Semaphores, Monitors, and Condition variables
• Very quick introduction to scheduling

Note: Some slides and/or pictures in the following are adapted from slides ©2005 Silberschatz, Galvin, and Gagne. Many slides generated from my lecture notes by Kubiatowicz.

How to implement Locks?
• Lock: prevents someone from doing something
  – Lock before entering critical section and before accessing shared data
  – Unlock when leaving, after accessing shared data
  – Wait if locked
    » Important idea: all synchronization involves waiting
    » Should sleep if waiting for a long time
• Atomic Load/Store: get a solution like Milk #3
  – Looked at this last lecture
  – Pretty complex and error prone
• Hardware Lock instruction
  – Is this a good idea?
  – What about putting a task to sleep?
    » How do you handle the interface between the hardware and the scheduler?
  – Complexity?
    » Done in the Intel 432
    » Each feature makes hardware more complex and slow

Naïve use of Interrupt Enable/Disable
• How can we build multi-instruction atomic operations?
  – Recall: the dispatcher gets control in two ways
    » Internal: thread does something to relinquish the CPU
    » External: interrupts cause the dispatcher to take the CPU
  – On a uniprocessor, can avoid context-switching by:
    » Avoiding internal events (although virtual memory is tricky)
    » Preventing external events by disabling interrupts
• Consequently, a naïve implementation of locks:

    LockAcquire { disable Ints; }
    LockRelease { enable Ints; }

• Problems with this approach:
  – Can't let the user do this! Consider the following:

        LockAcquire();
        while (TRUE) {;}

  – Real-time system — no guarantees on timing!
    » Critical sections might be arbitrarily long
  – What happens with I/O or other important events?
    » "Reactor about to meltdown. Help?"

Better Implementation of Locks by Disabling Interrupts
• Key idea: maintain a lock variable and impose mutual exclusion only during operations on that variable

    int value = FREE;

    Acquire() {
        disable interrupts;
        if (value == BUSY) {
            put thread on wait queue;
            Go to sleep();
            // Enable interrupts?
        } else {
            value = BUSY;
        }
        enable interrupts;
    }

    Release() {
        disable interrupts;
        if (anyone on wait queue) {
            take thread off wait queue;
            Place on ready queue;
        } else {
            value = FREE;
        }
        enable interrupts;
    }

New Lock Implementation: Discussion
• Why do we need to disable interrupts at all?
  – Avoid interruption between checking and setting the lock value
  – Otherwise two threads could think that they both have the lock

    Acquire() {
        disable interrupts;
        if (value == BUSY) {
            put thread on wait queue;
            Go to sleep();
            // Enable interrupts?
        } else {
            value = BUSY;       // <- the critical section
        }
        enable interrupts;
    }

• Note: unlike the previous solution, the critical section (inside Acquire()) is very short
  – User of the lock can take as long as they like in their own critical section: doesn't impact global machine behavior
  – Critical interrupts taken in time!

Interrupt re-enable in going to sleep
• What about re-enabling interrupts when going to sleep?

    Acquire() {
        disable interrupts;
        if (value == BUSY) {
            // Enable Position 1?
            put thread on wait queue;
            // Enable Position 2?
            Go to sleep();
            // Enable Position 3?
        } else {
            value = BUSY;
        }
        enable interrupts;
    }

• Before putting the thread on the wait queue (Position 1)?
  – Release can check the queue and not wake up the thread
• After putting the thread on the wait queue (Position 2)?
  – Release puts the thread on the ready queue, but the thread still thinks it needs to go to sleep
  – Misses the wakeup and still holds the lock (deadlock!)
• Want to put it after sleep() (Position 3). But – how?

How to Re-enable After Sleep()?
• In the scheduler, since interrupts are disabled when you call sleep:
  – Responsibility of the next thread to re-enable interrupts
  – When the sleeping thread wakes up, it returns into Acquire and re-enables interrupts

    Thread A                Thread B
    .
    disable ints
    sleep
                            sleep return
                            enable ints
                            .
                            disable ints
                            sleep
    sleep return
    enable ints
    .

Administrivia
• First Checkpoint due this Friday 11:59pm PST
  – Yes, this is graded!
  – Assume the design document is high level!
    » You should think of this as a document for a manager (your TA)
• Design review
  – High-level discussion of your approach
    » What will you modify?
    » What algorithm will you use?
    » How will things be linked together, etc.
    » Do not need a final design (complete with all semicolons!)
  – You will be asked about testing
    » Understand the testing framework
    » Are there things you are doing that are not tested by the tests we give you?
• Do your own work!
  – Please do not try to find solutions from previous terms
  – We will be on the lookout for this…
• Basic semaphores work in PintOS!
  – However, you will need to implement priority scheduling behavior both in the semaphore and the ready queue

Atomic Read-Modify-Write instructions
• Problems with the previous solution:
  – Can't give the lock implementation to users
  – Doesn't work well on multiprocessors
    » Disabling interrupts on all processors requires messages and would be very time consuming
• Alternative: atomic instruction sequences
  – These instructions read a value from memory and write a new value atomically
  – Hardware is responsible for implementing this correctly
    » on both uniprocessors (not too hard)
    » and multiprocessors (requires help from the cache coherence protocol)
  – Unlike disabling interrupts, can be used on both uniprocessors and multiprocessors

Examples of Read-Modify-Write

    test&set (&address) {           /* most architectures */
        result = M[address];
        M[address] = 1;
        return result;
    }

    swap (&address, register) {     /* x86 */
        temp = M[address];
        M[address] = register;
        register = temp;
    }

    compare&swap (&address, reg1, reg2) {   /* 68000 */
        if (reg1 == M[address]) {
            M[address] = reg2;
            return success;
        } else {
            return failure;
        }
    }

    load-linked & store-conditional (&address) {    /* R4000, Alpha */
    loop:
        ll   r1, M[address];
        movi r2, 1;                 /* Can do arbitrary computation */
        sc   r2, M[address];
        beqz r2, loop;
    }

Using Compare&Swap for queues
• Here is an atomic add-to-linked-list function:

    addToQueue(&object) {
        do {                        // repeat until no conflict
            ld r1, M[root]          // Get ptr to current head
            st r1, M[object]        // Save link in new object
        } until (compare&swap(&root, r1, object));
    }

(Diagram: root points to the head of a chain of objects linked by next pointers; the new object is swapped in at the head.)

Implementing Locks with test&set
• Another flawed, but simple solution:

    int value = 0;  // Free

    Acquire() {
        while (test&set(value));    // while busy
    }

    Release() {
        value = 0;
    }

• Simple explanation:
  – If the lock is free, test&set reads 0 and sets value=1, so the lock is now busy. It returns 0, so the while exits.
  – If the lock is busy, test&set reads 1 and sets value=1 (no change). It returns 1, so the while loop continues.
  – When we set value = 0, someone else can get the lock.
• Busy-waiting: the thread consumes cycles while waiting
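The lost-update race from the Deposit example can be replayed deterministically. This is a minimal Python model, not the lecture's assembly: each thread's register is an explicit local variable, and the interleaving is the exact one shown on the slide. The function name run_interleaving and the starting balance are illustrative assumptions.

```python
# Model of the Deposit race: two threads each do load / add / store on a
# shared balance, interleaved exactly as on the slide, so T2's update is lost.

def run_interleaving(balance, amount1, amount2):
    r1_t1 = balance          # T1: load r1, acct->balance
    r1_t2 = balance          # T2: load r1, acct->balance
    r1_t2 += amount2         # T2: add r1, amount2
    balance = r1_t2          # T2: store r1, acct->balance
    r1_t1 += amount1         # T1: add r1, amount1
    balance = r1_t1          # T1: store r1, acct->balance (overwrites T2's store)
    return balance

# With atomic deposits the result would be 100 + 10 + 20 = 130;
# this interleaving yields 110 -- amount2 is lost.
print(run_interleaving(100, 10, 20))
```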
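The Acquire/Release fix for that race can be demonstrated with a real lock. Here Python's threading.Lock stands in for the lecture's lock implementation (the variable name milklock follows the slide; the iteration count is arbitrary): with the critical section protected, two concurrent depositors never lose an update.

```python
# Two threads deposit concurrently; the lock makes read-modify-write atomic.
import threading

balance = 0
milklock = threading.Lock()

def deposit(amount, times):
    global balance
    for _ in range(times):
        with milklock:          # Acquire ... Release around the critical section
            balance += amount   # critical section: load, add, store

t1 = threading.Thread(target=deposit, args=(1, 100_000))
t2 = threading.Thread(target=deposit, args=(1, 100_000))
t1.start(); t2.start()
t1.join(); t2.join()
print(balance)                  # all 200,000 deposits are counted
```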
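The interrupt-disabling lock's value/wait-queue logic can also be modeled outside the kernel. This sketch keeps only the bookkeeping from the slide (FREE/BUSY, wait queue, ready queue); "disable/enable interrupts" is implicit because the model executes one operation at a time, and thread names are plain strings for illustration.

```python
# Model of the slide's Acquire()/Release(): a lock value plus a wait queue,
# with Release handing the lock directly to the first waiter.
from collections import deque

FREE, BUSY = 0, 1

class SleepingLock:
    def __init__(self):
        self.value = FREE
        self.wait_queue = deque()

    def acquire(self, thread):
        # (interrupts disabled for the duration of this method)
        if self.value == BUSY:
            self.wait_queue.append(thread)  # put thread on wait queue; go to sleep
            return False                    # caller is now blocked
        self.value = BUSY
        return True                         # caller holds the lock

    def release(self, ready_queue):
        # (interrupts disabled for the duration of this method)
        if self.wait_queue:
            t = self.wait_queue.popleft()   # take thread off wait queue
            ready_queue.append(t)           # place on ready queue
            # value stays BUSY: the lock passes straight to the waiter
        else:
            self.value = FREE

ready = deque()
lock = SleepingLock()
assert lock.acquire("A") is True    # A gets the lock
assert lock.acquire("B") is False   # B sleeps on the wait queue
lock.release(ready)                 # wakes B onto the ready queue, lock still BUSY
assert list(ready) == ["B"] and lock.value == BUSY
lock.release(ready)                 # B finishes; no waiters, so lock is FREE
assert lock.value == FREE
```

Note how the model mirrors the subtlety on the slide: Release never sets value back to FREE while someone is waiting, so the woken thread owns the lock when it runs.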
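The compare&swap addToQueue pattern translates directly: read the head, link the new node to it, then install the node only if the head is unchanged, retrying on conflict. In this Python model the compare_and_swap method is treated as a single atomic step, as the hardware instruction would guarantee; the class and method names are illustrative.

```python
# Lock-free push onto a shared list head, retrying until no other
# thread changed root between our read and our swap.

class Node:
    def __init__(self, val):
        self.val = val
        self.next = None

class AtomicList:
    def __init__(self):
        self.root = None

    def compare_and_swap(self, expected, new):
        # Modeled as atomic: install new only if root still equals expected.
        if self.root is expected:
            self.root = new
            return True
        return False

    def add_to_queue(self, node):
        while True:                     # do { ... } until no conflict
            head = self.root            # ld r1, M[root]
            node.next = head            # st r1, M[object]
            if self.compare_and_swap(head, node):
                return

lst = AtomicList()
a, b = Node("a"), Node("b")
lst.add_to_queue(a)
lst.add_to_queue(b)                     # b becomes the new head, linked to a
assert lst.root is b and lst.root.next is a
```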
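Finally, the test&set spinlock itself. Python has no test&set instruction, so this sketch models the atomic read-modify-write as one method; the 0/1 value convention and the spin-while-busy loop follow the slide.

```python
# Model of the test&set spinlock: value 0 = free, 1 = busy.

class SpinLock:
    def __init__(self):
        self.value = 0

    def test_and_set(self):
        # Modeled as atomic: read the old value, set the location to 1,
        # return the old value.
        old = self.value
        self.value = 1
        return old

    def acquire(self):
        while self.test_and_set():  # returns 1 (keep spinning) while busy
            pass                    # busy-waiting burns cycles here

    def release(self):
        self.value = 0

lock = SpinLock()
lock.acquire()                        # test&set read 0: lock obtained
assert lock.value == 1                # lock is now busy
assert lock.test_and_set() == 1       # a second acquirer sees 1 and would spin
lock.release()
assert lock.value == 0                # someone else can now get the lock
```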