Synchronization, Critical Sections and Semaphores


Lecture 09 (SGG Chapter 6)
Instructor: Dr. Tongping Liu, Department of Computer Science @ UTSA

Outline of Lecture 09
- Problems with concurrent access to shared data
  - Race conditions and critical sections
  - General structure for enforcing critical sections
- Software-based solutions
  - A simple solution
  - Peterson's solution
- Hardware-based solutions
  - Disabling interrupts
  - TestAndSet
  - Swap
- OS solution: semaphores
- Using semaphores to solve real problems

Objectives
- Introduce the critical-section problem, whose solutions can be used to ensure the consistency of shared data.
- Present both software and hardware solutions to the critical-section problem.
- Introduce the concept of an atomic transaction and describe mechanisms to ensure atomicity.
- Shared data may live in the same logical address space, or in different address spaces accessed through messages (covered later under distributed systems).

Threads Memory Model
- Conceptual model:
  - Multiple threads run in the same context of a process.
  - Each thread has its own separate thread context: thread ID, stack, stack pointer, PC, and general-purpose registers.
  - All threads share the remaining process context: code, data, heap, and shared-library segments, as well as open files and installed handlers.
- Operationally, this model is not strictly enforced:
  - Register values are truly separate and protected, but...
  - Any thread can read and write the stack of any other thread.

Mapping Variable Instances to Memory (sharing.c)

    char **ptr;  /* global */

    /* thread routine */
    void *thread(void *vargp)
    {
        int myid = (int)vargp;
        static int cnt = 0;
        printf("[%d]: %s (svar=%d)\n", myid, ptr[myid], ++cnt);
    }

    int main()
    {
        int i;
        pthread_t tid;
        char *msgs[2] = {
            "Hello from foo",
            "Hello from bar"
        };
        ptr = msgs;
        for (i = 0; i < 2; i++)
            Pthread_create(&tid, NULL, thread, (void *)i);
        .......
    }

- Global variable: 1 instance (ptr, in the data segment).
- Local variables of main: 1 instance each (i.m and msgs.m, on the main thread's stack).
- Local variable myid: 2 instances (myid.t0 on peer thread 0's stack, myid.t1 on peer thread 1's stack).
- Local static variable: 1 instance (cnt, in the data segment).

    Variable instance | Referenced by main thread? | Referenced by peer thread 0? | Referenced by peer thread 1?
    ptr               | yes                        | yes                          | yes
    cnt               | no                         | yes                          | yes
    i.m               | yes                        | no                           | no
    msgs.m            | yes                        | yes                          | yes
    myid.t0           | no                         | yes                          | no
    myid.t1           | no                         | no                           | yes

Different Outputs
- What are the possible outputs of sharing.c? Among them:
  - [0]: Hello from foo (svar=1), then [1]: Hello from bar (svar=2)
  - [1]: Hello from bar (svar=1), then [0]: Hello from foo (svar=2)
  - [1]: Hello from bar (svar=1), then [0]: Hello from foo (svar=1)
  - [0]: Hello from foo (svar=1), then [1]: Hello from bar (svar=1)
- Conclusion: different threads may execute in a random order.

Concurrent Access to Shared Data
- Two threads A and B access a shared variable Balance:

    Thread A:                      Thread B:
    Balance = Balance + 100        Balance = Balance - 200

    A1. LOAD  R1, BALANCE          B1. LOAD  R1, BALANCE
    A2. ADD   R1, 100              B2. SUB   R1, 200
    A3. STORE BALANCE, R1          B3. STORE BALANCE, R1

What Is the Problem?
- Observe: in a time-shared system, the exact instruction execution order cannot be predicted.
- Scenario 1 (sequential, correct execution):

    A1. LOAD  R1, BALANCE
    A2. ADD   R1, 100
    A3. STORE BALANCE, R1
    --- context switch ---
    B1. LOAD  R1, BALANCE
    B2. SUB   R1, 200
    B3. STORE BALANCE, R1

  Balance is effectively decreased by 100.
- Scenario 2 (mixed, wrong execution):

    B1. LOAD  R1, BALANCE
    B2. SUB   R1, 200
    --- context switch ---
    A1. LOAD  R1, BALANCE
    A2. ADD   R1, 100
    A3. STORE BALANCE, R1
    --- context switch ---
    B3. STORE BALANCE, R1

  Balance is effectively decreased by 200: Thread A's update is lost.
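The lost-update pattern above is easy to reproduce with POSIX threads. The following is a minimal sketch (not taken from the slides); the file name, function names, and loop counts are illustrative, and the repeated updates in a loop simply make the non-atomic load/add/store race easy to observe.

    /* race.c -- minimal sketch of the lost-update race (illustrative only).
     * Build: gcc race.c -o race -pthread
     * Each "balance = balance +/- 100" compiles to a load/modify/store
     * sequence like A1-A3 above, so concurrent updates can overwrite
     * each other. */
    #include <pthread.h>
    #include <stdio.h>

    static int balance = 0;                 /* shared variable */

    static void *deposit(void *arg)
    {
        for (int i = 0; i < 1000000; i++)
            balance = balance + 100;        /* not atomic */
        return NULL;
    }

    static void *withdraw(void *arg)
    {
        for (int i = 0; i < 1000000; i++)
            balance = balance - 100;        /* not atomic */
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, deposit, NULL);
        pthread_create(&b, NULL, withdraw, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("balance = %d (expected 0)\n", balance);  /* usually not 0 */
        return 0;
    }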
What Are the Possible Results of This Program?

    a = 0; b = 0;                  // initial state

    Thread 1:                      Thread 2:
    T1-1: if (b == 0)              T2-1: if (a == 0)
    T1-2:     a = 1;               T2-2:     b = 1;

- Possible results and their observed frequencies: a = 1, b = 0 (99.43%); a = 0, b = 1 (0.56%); a = 1, b = 1 (0.01%).
- The a = 1, b = 1 outcome occurs when both threads read before either one writes.

Common Properties of the Two Examples
- There are shared variables (the account balance; a and b).
- At least one of the operations is a write.
- The results of the execution are non-deterministic, depending on the execution order.

Race Conditions
- When multiple processes/threads read and write shared data and the outcome depends on the particular order in which the shared data is accessed, we have a race condition.
  - A serious problem for concurrent systems that use shared variables!
- How do we solve the problem? We need to make sure that certain high-level code sections are executed atomically.
  - An atomic operation completes in its entirety without being interrupted by any other potentially conflicting process.

Definition of Synchronization
- Cooperating processes/threads share data and affect each other, so with non-deterministic execution speed their executions are not reproducible.
- Concurrent execution:
  - On a single processor, achieved by time slicing.
  - On parallel/distributed systems, truly simultaneous.
- Synchronization: getting processes/threads to work together in a coordinated manner.

Critical-Section (CS) Problem
- Multiple processes/threads compete to access shared data.
- A critical section (or critical region) is a piece of code that accesses a shared resource (a data structure or device) that must not be accessed concurrently by more than one thread of execution.
- Problem: ensure that only one process/thread is allowed to execute in its critical section (for the same shared data) at any time. The execution of critical sections must be mutually exclusive in time.

General Structure for Critical Sections

    do {
        ......
        entry section
            critical section
        exit section
            remainder statements
    } while (1);

- In the entry section, the process requests "permission" to enter its critical section.

Requirements for CS Solutions
- Mutual exclusion: at most one task can be in its CS at any time.
- Progress: if all other tasks are in their remainder sections, a task is allowed to enter its CS. Only tasks that are not in their remainder sections can take part in deciding which task enters its CS next, and this decision cannot be postponed indefinitely.
- Bounded waiting: once a task has made a request to enter its CS, other tasks can enter their own CSs only a bounded number of times before the request is granted.

Solutions for the CS Problem
- Software-based solution: Peterson's solution.
- Hardware-based solutions: disabling interrupts; atomic instructions (TestAndSet and Swap).
- OS solution: semaphores.

A Simple Solution
- bool turn; indicates whose turn it is to enter the CS.
- T0 and T1 alternate between the CS and the remainder section (see the sketch that follows).
- Pros: Pi's critical section is executed iff turn == i, so mutual exclusion holds; Pi busy-waits while Pj is in its CS.
- Cons: progress is not satisfied, since the scheme requires strict alternation; a process cannot enter the CS more often than the other.
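A minimal sketch of this strict-alternation scheme follows. It is not the slides' exact pseudocode: the function names are illustrative, and C11 atomics are used only so that the busy-wait is not optimized away by the compiler; the algorithm itself is just the shared turn variable described above.

    /* Strict alternation between two threads T0 and T1 (sketch). */
    #include <stdatomic.h>

    static atomic_int turn = 0;          /* shared: whose turn it is */

    void enter_cs(int i)                 /* i is 0 or 1 */
    {
        while (atomic_load(&turn) != i)
            ;                            /* busy wait until it is my turn */
    }

    void exit_cs(int i)
    {
        atomic_store(&turn, 1 - i);      /* hand the turn to the other thread */
    }

    /* Each thread Ti loops: enter_cs(i); critical section; exit_cs(i); remainder.
     * Mutual exclusion holds (Ti is inside only when turn == i), but progress
     * fails: if the other thread never wants the CS again, the turn never comes
     * back, and Ti blocks even though nobody is in a critical section. */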
Peterson's Solution
- Three shared variables: turn and the two elements of flag[2].
- How does this solution satisfy the CS requirements (mutual exclusion, progress, bounded waiting)?

Peterson's Solution: Mutual Exclusion
- Q: when process 0 is in its critical section, can process 1 get into its critical section?
- While P0 is in its CS, flag[0] = 1 and either flag[1] = 0 or turn = 0.
- Thus, for process 1 (given flag[0] = 1): if flag[1] = 0, P1 is in its remainder section; if turn = 0, P1 waits before its CS. Either way P1 is not in its CS, so mutual exclusion holds.

Peterson's Solution: Progress
- Q: when P0 is in its remainder section, can P1 get into its critical section?
- Then flag[0] = 0, so P1 can enter without waiting.

Peterson's Solution: Summary and Issues
- Mutual exclusion, progress, and bounded waiting (through the alternation of turn) are all satisfied.
- Exercise: toggle the first two lines of the entry code for one process. Which CS requirement (mutual exclusion, progress, bounded waiting) does that break?
- Remaining issues: busy waiting; each thread needs slightly different code; it does not directly support more than two threads; it is error-prone. (A sketch of the entry and exit code appears at the end of this section.)

Hardware Solution 1: Disable Interrupts
- On a uniprocessor we could disable interrupts: the currently running code would then execute without preemption.

    do {
        ......
        DISABLE INTERRUPTS
            critical section
        ENABLE INTERRUPTS
            remainder section
    } while (1);

- What are the problems with this solution?
  1. Mutual exclusion is preserved, but efficiency is degraded: while one process is in its CS, we cannot interleave execution with other processes that are in their remainder sections.
  2. On a multiprocessor, mutual exclusion is not preserved.
  3. Keeping track of time is impossible inside the CS, since timer interrupts are disabled.

Hardware Support for Synchronization
- Synchronization needs to check and set a value atomically.
- IA-32 hardware provides a special instruction for this: xchg.

Hardware Instruction: TestAndSet
- The TestAndSet instruction tests and modifies the content of a word atomically (it cannot be interrupted).
- It keeps setting the lock to 1 and returns the old value.
- When the instruction accesses ...
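The excerpt does not reproduce Peterson's entry/exit code, so the following is a sketch of the standard two-thread algorithm referenced above. The function names are illustrative, and C11 sequentially consistent atomics are used because, on real hardware, plain variables are not enough: the compiler and CPU may reorder the stores and loads.

    /* Peterson's solution for two threads, i = 0 or 1 (sketch). */
    #include <stdatomic.h>
    #include <stdbool.h>

    static atomic_bool flag[2];      /* flag[i]: thread i wants to enter its CS */
    static atomic_int  turn;         /* which thread should yield */

    void peterson_enter(int i)
    {
        int j = 1 - i;
        atomic_store(&flag[i], true);        /* I want to enter */
        atomic_store(&turn, j);              /* ...but let the other go first */
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                                /* busy wait */
    }

    void peterson_exit(int i)
    {
        atomic_store(&flag[i], false);       /* done with my CS */
    }

For the "toggle the first two lines" exercise: if one thread sets turn before raising its flag, both threads can pass the while test at the same time, so mutual exclusion is the requirement that breaks.

A lock built on TestAndSet can be sketched in the same spirit; C11's atomic_flag_test_and_set atomically sets a flag to 1 and returns the old value, which is exactly the behaviour the slide describes (again, the acquire/release names are illustrative).

    /* Spin lock built on an atomic test-and-set (sketch). */
    #include <stdatomic.h>

    static atomic_flag lock = ATOMIC_FLAG_INIT;     /* clear = unlocked */

    void acquire(void)
    {
        /* Atomically set the flag to 1 and get the old value; keep spinning
         * while the old value was already 1 (someone else holds the lock). */
        while (atomic_flag_test_and_set(&lock))
            ;                                       /* busy wait */
    }

    void release(void)
    {
        atomic_flag_clear(&lock);                   /* set the lock back to 0 */
    }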