Semaphores and Other Wait-And-Signal Mechanisms


Carsten Griwodz, University of Oslo (including slides by Otto Anshus and Kai Li)

Critical Regions

Four conditions to provide mutual exclusion:
1. No two threads are simultaneously in the critical region
2. No assumptions are made about speeds or numbers of CPUs
3. No thread running outside its critical region may block another thread
4. No thread must wait forever to enter its critical region

Mutual exclusion is achieved using critical regions.

Recall: Processes and Threads

• Process: address space, program text and data, open files, child-process IDs, alarms, signal handlers, accounting information. Implemented in the kernel.
• Thread: program counter, registers, stack. Implemented in the kernel or in user space. Threads are the scheduled entities.

Producer-Consumer Problem

• Main problem description
  – Two threads
  – Different actions in the critical region
  – The consumer cannot enter the CR more often than the producer
• Two sub-problems
  – Unbounded PCP: the producer can enter the CR as often as it wants
  – Bounded PCP: the producer can enter the CR only N times more often than the consumer

Unbounded PCP

The producer calls PUT(msg) and the consumer calls GET(buf) on a shared queue Q. Rules for the queue Q:
• No Get when empty
• Q is shared, so there must be mutual exclusion between Put and Get

Recall: Mutexes

• Can be acquired and released
  – Only one thread can hold a mutex at a time
  – A second thread trying to acquire it must wait
• Mutexes
  – Can be implemented using busy waiting
  – Simpler with advanced atomic operations (disable interrupts, TSL, XCHG, …), but these are still busy-waiting approaches
  – Better implemented using the system calls block and unblock

Bounded PCP

The producer calls PUT(msg) and the consumer calls GET(buf) on a shared buffer B of capacity N, with an in and an out position. Rules for the buffer B:
• No Get when empty
• No Put when full
• B is shared, so there must be mutual exclusion between Put and Get

Mutex Solution

Unbounded PCP (the consumer busy-waits):

    Put(msg) {                 Get(buf) {
      acquire(mutex);            acquire(mutex);
      <put>                      while (empty) {
      release(mutex);              release(mutex);
    }                              acquire(mutex);
                                 }
                                 <get>
                                 release(mutex);
                               }

Bounded PCP (both sides busy-wait):

    Put(msg) {                 Get(buf) {
      acquire(mutex);            acquire(mutex);
      while (full) {             while (empty) {
        release(mutex);            release(mutex);
        acquire(mutex);            acquire(mutex);
      }                          }
      <put>                      <get>
      release(mutex);            release(mutex);
    }                          }

Two Kinds of Synchronization

• MUTEX: the lock is initially OPEN. Acquire lets the first caller through and then blocks the next caller until Release.

    Acquire(id);        Acquire(id);
    <CR>                <CR>
    Release(id);        Release(id);

• CONDITION SYNCHRONIZATION (SIGNAL): the lock is initially CLOSED. Acquire blocks the first caller until Release.

    Acquire(id);        Release(id);

Sleep and Wakeup / Signal and Wait

• Wait(cond): insert(caller, cond_queue); block this thread
• Signal(cond): unblock the first thread in cond_queue, or just return if the queue is empty

There is no counting: unused signals are ignored. Wait is atomic.

Unbounded PCP using Signal and Wait — first attempt

    Producer:                  Consumer:
    while (1) {                while (1) {
      <process>                  if (empty)
      acquire(mutex);              wait(cond);
      <insert>                   acquire(mutex);
      signal(cond);              <remove>
      release(mutex);            release(mutex);
    }                            <process>
                               }

Problem: lost signal. The consumer tests "empty" and waits outside the mutex, so a signal sent between the test and the wait is ignored and the consumer may wait forever.

Unbounded PCP using Signal and Wait — second attempt

    Producer:                  Consumer:
    while (1) {                while (1) {
      <process>                  acquire(mutex);
      acquire(mutex);            while (empty) {
      <insert>                     wait(cond);
      signal(cond);                release(mutex);
      release(mutex);              acquire(mutex);
    }                            }
                                 <remove>
                                 release(mutex);
                                 <process>
                               }

Problem: the producer can't enter. The consumer calls wait(cond) while still holding the mutex, so the producer can never acquire the mutex to insert and signal.

Unbounded PCP using Signal and Wait — third attempt

    Producer:                  Consumer:
    while (1) {                while (1) {
      <process>                  acquire(mutex);
      acquire(mutex);            while (empty) {
      <insert>                     release(mutex);
      signal(cond);                wait(cond);
      release(mutex);              acquire(mutex);
    }                            }
                                 <remove>
                                 release(mutex);
                                 <process>
                               }

Problem: lost signal again. A signal sent between release(mutex) and wait(cond) is not remembered, so the consumer may wait forever.
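The fix that modern thread libraries provide is to make releasing the mutex and blocking the caller a single atomic step. As a minimal sketch (not from the slides; the queue type and the put/get helpers are illustrative assumptions), the unbounded PCP with POSIX threads looks like this, where pthread_cond_wait() atomically releases the mutex and blocks, closing the lost-signal window of the third attempt:

    /* Minimal sketch of the unbounded PCP with POSIX threads (illustrative,
     * not part of the original slides). */
    #include <pthread.h>
    #include <stdlib.h>

    typedef struct node { int msg; struct node *next; } node_t;

    typedef struct {
        node_t          *head, *tail;
        pthread_mutex_t  mutex;
        pthread_cond_t   nonempty;
    } queue_t;

    void queue_init(queue_t *q)
    {
        q->head = q->tail = NULL;
        pthread_mutex_init(&q->mutex, NULL);
        pthread_cond_init(&q->nonempty, NULL);
    }

    void put(queue_t *q, int msg)                  /* producer side */
    {
        node_t *n = malloc(sizeof *n);
        n->msg = msg;
        n->next = NULL;

        pthread_mutex_lock(&q->mutex);             /* acquire(mutex) */
        if (q->tail) q->tail->next = n;            /* <insert> */
        else         q->head = n;
        q->tail = n;
        pthread_cond_signal(&q->nonempty);         /* signal(cond) */
        pthread_mutex_unlock(&q->mutex);           /* release(mutex) */
    }

    int get(queue_t *q)                            /* consumer side */
    {
        pthread_mutex_lock(&q->mutex);             /* acquire(mutex) */
        while (q->head == NULL)                    /* while (empty) */
            pthread_cond_wait(&q->nonempty, &q->mutex);   /* atomic release + wait */
        node_t *n = q->head;                       /* <remove> */
        q->head = n->next;
        if (q->head == NULL) q->tail = NULL;
        pthread_mutex_unlock(&q->mutex);           /* release(mutex) */

        int msg = n->msg;
        free(n);
        return msg;
    }

Because the consumer re-tests the condition in a while loop after waking up, this sketch also tolerates spurious wakeups and multiple consumers.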
Threads wait for …

• Access to a critical region
  – Mutex
  – Semaphore
• A condition to be fulfilled
  – Condition variable
  – Barrier
  – Semaphore

Semaphores

Semaphores (Dijkstra, 1965)

• Down, Wait, or "P" (Dutch: prolaag)
  – Atomic
  – Decrement the semaphore value by 1
  – Block if the result is not positive
• Up, Signal, or "V" (Dutch: verhoog)
  – Atomic
  – Increment the semaphore value by 1
  – Wake up a waiting thread, if any

    P(s) {                     V(s) {
      if (--s < 0)               if (++s <= 0)
        Block(s);                  Unblock(s);
    }                          }

s can become negative: it then counts the number of waiting threads. s is NOT accessible through any means other than calling P and V.

Semaphores with Busy Wait

    P(s):                      V(s):
      while (s <= 0) {};         s++;
      s--;

The test and the decrement in P must be atomic.
• Is starvation possible?
• Does it matter in practice?

The Structure of a Semaphore

A semaphore consists of an integer s and a queue, sem_wait_queue, of threads that called P(s) while s was <= 0 and are waiting to return.
• P(s): decrement s by 1 and block the caller on sem_wait_queue if necessary (FIFO queueing is fair)
• V(s): increment s by 1 and unblock the first waiting thread
• Making P and V atomic: disable interrupts, implement P() and V() as system calls, or use entry-exit protocols

Using Semaphores

"The Signal" — s := 0:

    A:  P(s);            B:  V(s);

A blocks until B says V.

"The Mutex" — s := 1:

    P(s);                P(s);
    <CR>                 <CR>
    V(s);                V(s);

One thread gets in; the next blocks until V is executed.

"The Team" — s := 11:

    P(s);                P(s);
    <max 11>             <max 11>
    V(s);                V(s);

Up to 11 threads can pass P; the twelfth will block until V is said by one of the eleven already in there. NB: remember to set the initial semaphore value!

Simple to debug?

    A:  x := 0;          B:  y := 0;
        P(x);                P(y);
        …                    …
        V(y);                V(x);

What will happen? They are forever waiting for each other's signal (the "no milk" situation): a deadlock.
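To make the two basic usage patterns concrete, here is a small sketch (an illustrative assumption, not part of the slides) of "The Signal" and "The Mutex" with POSIX semaphores, where sem_wait() plays the role of P and sem_post() the role of V:

    /* "The Signal" and "The Mutex" with POSIX semaphores (illustrative sketch). */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t signal_sem;    /* "The Signal": initial value 0 */
    static sem_t mutex_sem;     /* "The Mutex":  initial value 1 */
    static int   shared = 0;

    static void *thread_a(void *arg)
    {
        (void)arg;
        sem_wait(&signal_sem);      /* P(s): A blocks until B says V */
        sem_wait(&mutex_sem);       /* P(mutex): enter the critical region */
        shared++;
        sem_post(&mutex_sem);       /* V(mutex): leave the critical region */
        return NULL;
    }

    static void *thread_b(void *arg)
    {
        (void)arg;
        sem_wait(&mutex_sem);
        shared++;
        sem_post(&mutex_sem);
        sem_post(&signal_sem);      /* V(s): wake A */
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        sem_init(&signal_sem, 0, 0);    /* remember the initial values! */
        sem_init(&mutex_sem, 0, 1);
        pthread_create(&a, NULL, thread_a, NULL);
        pthread_create(&b, NULL, thread_b, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("shared = %d\n", shared);   /* always 2: B runs its CR before A */
        return 0;
    }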
Examples

Unbounded PCP using Semaphores

Rules for the queue Q:
• No Get when empty
• Q is shared, so there must be mutual exclusion between Put and Get

One semaphore for each condition we must wait for to become TRUE:
• Q empty: nonempty := 0
• Q mutex: mutex := 1

Is the mutex needed when there is only one producer and one consumer, with PUT at one end and GET at the other end?

    PUT(msg):              GET(buf):
      P(mutex);              P(nonempty);
      <insert>               P(mutex);
      V(mutex);              <remove>
      V(nonempty);           V(mutex);

Bounded PCP using Semaphores

Rules for the buffer B (capacity N, PUT at one end, GET at the other end):
• No Get when empty
• No Put when full
• B is shared, so there must be mutual exclusion between Put and Get

One semaphore for each condition we must wait for to become TRUE:
• B empty: nonempty := 0
• B full: nonfull := N
• B mutex: mutex := 1

    PUT(msg):              GET(buf):
      P(nonfull);            P(nonempty);
      P(mutex);              P(mutex);
      <insert>               <remove>
      V(mutex);              V(mutex);
      V(nonempty);           V(nonfull);
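The pseudocode above translates almost one-to-one into POSIX semaphores. A runnable sketch under a few assumptions (a fixed-size int buffer, one producer and one consumer, made-up item counts):

    /* Bounded PCP with POSIX semaphores: nonfull starts at N, nonempty at 0,
     * mutex at 1 (illustrative sketch). */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define N 8                       /* buffer capacity (assumed) */

    static int   buffer[N];
    static int   in = 0, out = 0;     /* PUT at one end, GET at the other */
    static sem_t nonfull, nonempty, mutex;

    static void put(int msg)
    {
        sem_wait(&nonfull);           /* P(nonfull): block while B is full */
        sem_wait(&mutex);             /* P(mutex) */
        buffer[in] = msg;             /* <insert> */
        in = (in + 1) % N;
        sem_post(&mutex);             /* V(mutex) */
        sem_post(&nonempty);          /* V(nonempty) */
    }

    static int get(void)
    {
        sem_wait(&nonempty);          /* P(nonempty): block while B is empty */
        sem_wait(&mutex);             /* P(mutex) */
        int msg = buffer[out];        /* <remove> */
        out = (out + 1) % N;
        sem_post(&mutex);             /* V(mutex) */
        sem_post(&nonfull);           /* V(nonfull) */
        return msg;
    }

    static void *producer(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100; i++)
            put(i);
        return NULL;
    }

    static void *consumer(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100; i++)
            printf("got %d\n", get());
        return NULL;
    }

    int main(void)
    {
        pthread_t p, c;
        sem_init(&nonfull, 0, N);     /* remember the initial values! */
        sem_init(&nonempty, 0, 0);
        sem_init(&mutex, 0, 1);
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }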
Dining Philosophers Problem

• Five philosophers, five dishes, five forks
• A philosopher needs two forks to eat
• Usually the philosophers think; when they are hungry they try to eat
• How do we prevent the philosophers from starving?

Dining Philosophers

• Each philosopher needs 2 forks to eat
• 5 philosophers would need 10 forks to let all eat concurrently
• With 5 forks, at most 2 can eat concurrently

A single mutex on the whole table lets only 1 eat at a time:

    Ti:  P(mutex);
         eat;
         V(mutex);

One semaphore per fork, s(i) = 1 initially; each philosopher gets the left fork, then the right. Deadlock is possible (every philosopher may hold his left fork and wait for the right one):

    Ti:  P(s(i));
         P(s(i+1));
         eat;
         V(s(i+1));
         V(s(i));

Get the left fork; get the right fork if it is free, else put the left fork back: now starvation is possible.

Dining Philosophers — avoiding starvation

To avoid starvation, the philosophers can look after each other. Each philosopher i has a state (Thinking, Want, or Eating) and a private semaphore s(i), initialized to 0.
• Entry: if neither the left nor the right neighbour is eating, we can start eating
• Exit: if the left (right) neighbour wants to eat and his left (right) neighbour is not eating, we start him eating

    Ti:  while (1) {
           <think>
           ENTRY;
           <eat>
           EXIT;
         }

    ENTRY:
      P(mutex);
      state(i) := Want;
      if (state(i-1) != Eating AND state(i+1) != Eating) {
        /* Safe to eat */
        state(i) := Eating;
        V(s(i));
      }
      V(mutex);
      P(s(i));    /* Init was 0!! We or a neighbour must say V(s(i)) to us! */

    EXIT:
      P(mutex);
      state(i) := Thinking;
      if (state(i-1) == Want AND state(i-2) != Eating) {
        state(i-1) := Eating;
        V(s(i-1));    /* Start the left neighbour */
      }
      /* Analogous for the right neighbour */
      V(mutex);

Dining Philosophers — a non-symmetric solution

With s(i) = 1 initially, the symmetric "get left, then get right" solution can deadlock. Making one philosopher take his forks in the opposite order removes the danger of circular waiting (deadlock): T1–T4 get the left fork and then the right, while T5 gets the right fork and then the left.

    T1–T4:  P(s(i));           T5:  P(s(1));
            P(s(i+1));              P(s(5));
            <eat>                   <eat>
            V(s(i+1));              V(s(5));
            V(s(i));                V(s(1));

A non-symmetric solution, but still quite elegant. Can we do better than this in a simple way?

Readers and Writers Problem

• Several threads share data in a critical region
• Sometimes a thread wants to read the data, sometimes a thread wants to change the data
• Readers can enter the critical region together
• Writers cannot enter the critical region together

The Readers and Writers Problem

Each thread repeatedly does:

    while (1) {
      <do things>
      if (you_want)
        READ;
      else
        WRITE;
    }

    READ:                        WRITE:
      P(mutex);                    P(db);
      rc = rc + 1;                 <write data>
      if (rc == 1) P(db);          V(db);
      V(mutex);
      <read data>
      P(mutex);
      rc = rc - 1;
      if (rc == 0) V(db);
      V(mutex);

This is one solution to the readers and writers problem, but too many readers can starve the writers.

The Readers and Writers Problem — second solution

    READ:                        WRITE:
      P(stopreaders);              P(stopreaders);
      P(mutex);                    P(db);
      rc = rc + 1;                 V(stopreaders);
      if (rc == 1) P(db);          <write data>
      V(mutex);                    V(db);
      V(stopreaders);
      <read data>
      P(mutex);
      rc = rc - 1;
      if (rc == 0) V(db);
      V(mutex);

Another solution to the readers and writers problem: a writer holds stopreaders while it waits for the database, so new readers cannot overtake it.

Other Wait-and-Signal Mechanisms

Event Counts (Reed, 1977)

• Init(ec): set the event count to 0
• Read(ec): return the value of the event count ec
• Advance(ec): atomically increment ec by 1
• Await(ec, v): wait until ec >= v

Bounded PCP with Event Counts

The buffer B has capacity N; in = out = 0 initially.

    producer() {                      consumer() {
      int next = 0;                     int next = 0;
      while (1) {                       while (1) {
        produce an item;                  next++;
        next++;                           await(in, next);
        await(out, next - N);             take an item from the buffer;
        put the item in the buffer;       advance(out);
        advance(in);                    }
      }                               }
    }
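The slides only give the event-count interface. One plausible way to realize it, sketched on top of a mutex and a condition variable (an assumption of this text, not Reed's original construction):

    /* Event count built from a mutex and a condition variable (illustrative). */
    #include <pthread.h>

    typedef struct {
        long            value;
        pthread_mutex_t mutex;
        pthread_cond_t  changed;
    } eventcount_t;

    void ec_init(eventcount_t *ec)               /* Init(ec): set to 0 */
    {
        ec->value = 0;
        pthread_mutex_init(&ec->mutex, NULL);
        pthread_cond_init(&ec->changed, NULL);
    }

    long ec_read(eventcount_t *ec)               /* Read(ec) */
    {
        pthread_mutex_lock(&ec->mutex);
        long v = ec->value;
        pthread_mutex_unlock(&ec->mutex);
        return v;
    }

    void ec_advance(eventcount_t *ec)            /* Advance(ec): atomic +1 */
    {
        pthread_mutex_lock(&ec->mutex);
        ec->value++;
        pthread_cond_broadcast(&ec->changed);    /* waiters recheck their thresholds */
        pthread_mutex_unlock(&ec->mutex);
    }

    void ec_await(eventcount_t *ec, long v)      /* Await(ec, v): wait until ec >= v */
    {
        pthread_mutex_lock(&ec->mutex);
        while (ec->value < v)
            pthread_cond_wait(&ec->changed, &ec->mutex);
        pthread_mutex_unlock(&ec->mutex);
    }

Advance broadcasts rather than signals because different waiters may be waiting for different thresholds; each one rechecks its own condition before returning.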