
CS162 Operating Systems and Systems Programming
Lecture 9: Synchronization, Readers/Writers Example, Scheduling

September 26th, 2016
Prof. Anthony D. Joseph
http://cs162.eecs.Berkeley.edu

Review: Monitor with Condition Variables
• Lock: the lock provides mutual exclusion to shared data
  – Always acquire before accessing shared data structure
  – Always release after finishing with shared data
  – Lock initially free
• Condition Variable: a queue of threads waiting for something inside a critical section
  – Key idea: make it possible to go to sleep inside critical section by atomically releasing lock at time we go to sleep
  – Contrast to semaphores: Can't wait inside critical section

2/22/16 Joseph CS162 ©UCB Spring 2016 Lec 9.2

Review: Condition Variables
• How do we change the RemoveFromQueue() routine to wait until something is on the queue?
  – Could do this by keeping a count of the number of things on the queue (with semaphores), but error prone
• Condition Variable: a queue of threads waiting for something inside a critical section
  – Key idea: allow sleeping inside critical section by atomically releasing lock at time we go to sleep
  – Contrast to semaphores: Can't wait inside critical section
• Operations:
  – Wait(&lock): Atomically release lock and go to sleep. Re-acquire lock later, before returning.
  – Signal(): Wake up one waiter, if any
  – Broadcast(): Wake up all waiters
• Rule: Must hold lock when doing condition variable ops!
  – In Birrell paper, he says can perform signal() outside of lock – IGNORE HIM (this is only an optimization)

Review: Mesa vs. Hoare Monitors
• Need to be careful about precise definition of signal and wait. Consider a piece of our dequeue code:

    while (queue.isEmpty()) {
      dataready.wait(&lock);   // If nothing, sleep
    }
    item = queue.dequeue();    // Get next item

  – Why didn't we do this?

    if (queue.isEmpty()) {
      dataready.wait(&lock);   // If nothing, sleep
    }
    item = queue.dequeue();    // Get next item

• Answer: depends on the type of scheduling
  – Hoare-style (most textbooks):
    » Signaler gives lock, CPU to waiter; waiter runs immediately
    » Waiter gives up lock, processor back to signaler when it exits critical section or if it waits again
  – Mesa-style (most real operating systems):
    » Signaler keeps lock and processor
    » Waiter placed on ready queue with no special priority
    » Practically, need to check condition again after wait

Extended Example: Readers/Writers Problem
[Figure: several readers (R) and a writer (W) contending for access to a shared database]
• Motivation: Consider a shared database
  – Two classes of users:
    » Readers – never modify database
    » Writers – read and modify database
  – Is using a single lock on the whole database sufficient?
    » Like to have many readers at the same time
    » Only one writer at a time

Basic Readers/Writers Solution
• Correctness Constraints:
  – Readers can access database when no writers
  – Writers can access database when no readers or writers
  – Only one thread manipulates state variables at a time
• Basic structure of a solution:
  – Reader()
      Wait until no writers
      Access database
      Check out – wake up a waiting writer
  – Writer()
      Wait until no active readers or writers
      Access database
      Check out – wake up waiting readers or writer
  – State variables (protected by a lock called "lock"):
    » int AR: Number of active readers; initially = 0
    » int WR: Number of waiting readers; initially = 0
    » int AW: Number of active writers; initially = 0
    » int WW: Number of waiting writers; initially = 0
    » Condition okToRead = NIL
    » Condition okToWrite = NIL
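The Mesa rule "check the condition again after wait" maps directly onto modern threading libraries, whose condition variables are Mesa-style. A minimal sketch in Python (names such as `dataready` are carried over from the slide's pseudocode; this is an illustration, not the lecture's own code):

```python
import threading
from collections import deque

queue = deque()
lock = threading.Lock()
dataready = threading.Condition(lock)   # condition variable tied to the lock

def add_to_queue(item):
    with lock:
        queue.append(item)
        dataready.notify()              # Signal(): wake one waiter, if any

def remove_from_queue():
    with lock:
        # Mesa semantics: another thread may run between notify() and our
        # wakeup, so re-check the condition in a while loop, not an `if`.
        while not queue:
            dataready.wait()            # atomically release lock, sleep, re-acquire
        return queue.popleft()          # Get next item
```

Under Hoare semantics the `if` version would be safe; under Mesa semantics only the `while` version is.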

Code for a Reader

    Reader() {
      // First check self into system
      lock.Acquire();
      while ((AW + WW) > 0) {    // Is it safe to read?
        WR++;                    // No. Writers exist
        okToRead.wait(&lock);    // Sleep on cond var
        WR--;                    // No longer waiting
      }
      AR++;                      // Now we are active!
      lock.Release();
      // Perform actual read-only access
      AccessDatabase(ReadOnly);
      // Now, check out of system
      lock.Acquire();
      AR--;                      // No longer active
      if (AR == 0 && WW > 0)     // No other active readers
        okToWrite.signal();      // Wake up one writer
      lock.Release();
    }

(Why release the lock around the database access?)

Code for a Writer

    Writer() {
      // First check self into system
      lock.Acquire();
      while ((AW + AR) > 0) {    // Is it safe to write?
        WW++;                    // No. Active users exist
        okToWrite.wait(&lock);   // Sleep on cond var
        WW--;                    // No longer waiting
      }
      AW++;                      // Now we are active!
      lock.Release();
      // Perform actual read/write access
      AccessDatabase(ReadWrite);
      // Now, check out of system
      lock.Acquire();
      AW--;                      // No longer active
      if (WW > 0) {              // Give priority to writers
        okToWrite.signal();      // Wake up one writer
      } else if (WR > 0) {       // Otherwise, wake readers
        okToRead.broadcast();    // Wake all readers
      }
      lock.Release();
    }

(Why broadcast() for readers but signal() for a writer?)

Simulation of Readers/Writers Solution
• Consider the following sequence of operators:
  – R1, R2, W1, R3
• On entry, each reader checks the following:

    while ((AW + WW) > 0) {    // Is it safe to read?
      WR++;                    // No. Writers exist
      okToRead.wait(&lock);    // Sleep on cond var
      WR--;                    // No longer waiting
    }
    AR++;                      // Now we are active!

• First, R1 comes along: AR = 1, WR = 0, AW = 0, WW = 0
• Next, R2 comes along: AR = 2, WR = 0, AW = 0, WW = 0
• Now, readers may take a while to access database
  – Situation: Locks released
  – Only AR is non-zero

Simulation (2)
• Next, W1 comes along:

    while ((AW + AR) > 0) {    // Is it safe to write?
      WW++;                    // No. Active users exist
      okToWrite.wait(&lock);   // Sleep on cond var
      WW--;                    // No longer waiting
    }
    AW++;

• Can't start because of readers, so go to sleep: AR = 2, WR = 0, AW = 0, WW = 1
• Finally, R3 comes along: AR = 2, WR = 1, AW = 0, WW = 1
• Now, say that R2 finishes before R1: AR = 1, WR = 1, AW = 0, WW = 1
• Finally, last of first two readers (R1) finishes and wakes up writer:

    if (AR == 0 && WW > 0)     // No other active readers
      okToWrite.signal();      // Wake up one writer

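The Reader()/Writer() pseudocode above transcribes almost line-for-line into Python. A hedged sketch (the shared `database` list and the write payload are stand-ins for the slides' AccessDatabase(); everything else mirrors the slide's state machine):

```python
import threading

lock = threading.Lock()
okToRead = threading.Condition(lock)
okToWrite = threading.Condition(lock)
AR = WR = AW = WW = 0       # active/waiting readers and writers
database = []               # stand-in for the shared database

def reader():
    global AR, WR
    with lock:                      # First check self into system
        while AW + WW > 0:          # Is it safe to read?
            WR += 1                 # No. Writers exist
            okToRead.wait()         # Sleep on cond var
            WR -= 1                 # No longer waiting
        AR += 1                     # Now we are active!
    snapshot = list(database)       # Perform actual read-only access
    with lock:                      # Now, check out of system
        AR -= 1                     # No longer active
        if AR == 0 and WW > 0:      # No other active readers
            okToWrite.notify()      # Wake up one writer
    return snapshot

def writer(value):
    global AW, WW
    with lock:                      # First check self into system
        while AW + AR > 0:          # Is it safe to write?
            WW += 1                 # No. Active users exist
            okToWrite.wait()        # Sleep on cond var
            WW -= 1                 # No longer waiting
        AW += 1                     # Now we are active!
    database.append(value)          # Perform actual read/write access
    with lock:                      # Now, check out of system
        AW -= 1                     # No longer active
        if WW > 0:                  # Give priority to writers
            okToWrite.notify()      # Wake up one writer
        elif WR > 0:                # Otherwise, wake readers
            okToRead.notify_all()   # broadcast(): wake all readers
```

`notify()`/`notify_all()` play the roles of signal()/broadcast(); because Python's condition variables are Mesa-style, the while loops around wait() are essential.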

Simulation (3)
• When writer wakes up, get: AR = 0, WR = 1, AW = 1, WW = 0
• Then, when writer finishes:

    if (WW > 0) {              // Give priority to writers
      okToWrite.signal();      // Wake up one writer
    } else if (WR > 0) {       // Otherwise, wake readers
      okToRead.broadcast();    // Wake all readers
    }

  – Writer wakes up reader, so get: AR = 1, WR = 0, AW = 0, WW = 0
• When reader completes, we are finished

Questions
• Can readers starve? Consider Reader() entry code:

    while ((AW + WW) > 0) {    // Is it safe to read?
      WR++;                    // No. Writers exist
      okToRead.wait(&lock);    // Sleep on cond var
      WR--;                    // No longer waiting
    }
    AR++;                      // Now we are active!

• What if we erase the condition check in Reader exit?

    AR--;                      // No longer active
    okToWrite.signal();        // Wake up one writer

• Further, what if we turn the signal() into broadcast()?

    AR--;                      // No longer active
    okToWrite.broadcast();     // Wake up all writers

• Finally, what if we use only one condition variable (call it "okToContinue") instead of two separate ones?
  – Both readers and writers sleep on this variable
  – Must use broadcast() instead of signal()

Administrivia
• Midterm on Wednesday 9/28, 5-6:30PM
  – 1 LeConte (Last name A-H) and 2050 VLSB (Last name I-Z)
  – Topics include course material through lecture 9 (today)
    » Lectures, project 1, homeworks, readings, textbook
  – Closed book, no calculators, one double-sided letter-sized page of handwritten notes
  – Review today 6:45-8:30PM in Hearst Field Annex A1
• No office hours on Mon 10/3 (covering CS 262A)
• Deadlines next week:
  – HW2 due next Monday (10/3)
  – Project 1 code due next Wednesday (10/5)
  – Project 1 final report due next Friday (10/7)

BREAK

Can We Construct Monitors from Semaphores?
• Locking aspect is easy: Just use a mutex
• Can we implement condition variables this way?

    Wait()   { semaphore.P(); }
    Signal() { semaphore.V(); }

  – Doesn't work: Wait() may sleep with lock held
• Does this work better?

    Wait(Lock lock) {
      lock.Release();
      semaphore.P();
      lock.Acquire();
    }
    Signal() { semaphore.V(); }

  – No: Condition vars have no history, semaphores have history:
    » What if thread signals and no one is waiting? NO-OP
    » What if thread later waits? Thread Waits
    » What if thread V's and no one is waiting? Increment
    » What if thread later does P? Decrement and continue

Construction of Monitors from Semaphores (cont'd)
• Problem with previous try:
  – P and V are commutative – result is the same no matter what order they occur
  – Condition variables are NOT commutative
• Does this fix the problem?

    Wait(Lock lock) {
      lock.Release();
      semaphore.P();
      lock.Acquire();
    }
    Signal() {
      if semaphore queue is not empty
        semaphore.V();
    }

  – Not legal to look at contents of semaphore queue
  – There is a race condition – signaler can slip in after lock release and before waiter executes semaphore.P()
• It is actually possible to do this correctly
  – Complex solution for Hoare scheduling in book
  – Can you come up with simpler Mesa-scheduled solution?

Monitor Conclusion
• Monitors represent the logic of the program
  – Wait if necessary
  – Signal when change something so any waiting threads can proceed
• Basic structure of monitor-based program:

    lock.acquire();
    while (need to wait) {     // Check and/or update state variables
      condvar.wait();          // Wait if necessary
    }
    lock.release();

    do something so no need to wait

    lock.acquire();
    condvar.signal();          // Check and/or update state variables
    lock.release();

C-Language Support for Synchronization
• C language: Pretty straightforward synchronization
  – Just make sure you know all the code paths out of a critical section

    int Rtn() {
      lock.acquire();
      …
      if (exception) {
        lock.release();
        return errReturnCode;
      }
      …
      lock.release();
      return OK;
    }

  – Watch out for setjmp/longjmp!
    » Can cause a non-local jump out of procedure
    » In example, procedure E calls longjmp, popping stack back to procedure B
    » If procedure C had lock.acquire(), problem!
  [Figure: stack growth through Proc A → Proc B (calls setjmp) → Proc C (lock.acquire) → Proc D → Proc E (calls longjmp)]
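The slides leave "a simpler Mesa-scheduled solution" as an open question. One common sketch keeps a waiter count protected by the monitor lock; this is an assumption-laden illustration, not the lecture's answer, and it is safe only under Mesa semantics, where every waiter re-checks its condition in a while loop after waking:

```python
import threading

class MesaCondVar:
    """Condition variable built from a plain semaphore, Mesa-style.

    Not Hoare-correct: a signal can be consumed by a thread that slips in
    between the lock release and sem.acquire(), and a waiter may wake
    spuriously -- tolerable only because Mesa waiters re-check in a loop.
    """
    def __init__(self):
        self.sem = threading.Semaphore(0)   # counts pending signals
        self.waiters = 0                    # protected by the monitor lock

    def wait(self, lock):
        self.waiters += 1       # still holding the monitor lock
        lock.release()          # NOT atomic with P() -- hence Mesa-only
        self.sem.acquire()      # P(): sleep until a signal arrives
        lock.acquire()          # re-acquire before returning
        self.waiters -= 1

    def signal(self):           # caller must hold the monitor lock
        if self.waiters > 0:    # no waiters => no-op, unlike a bare V()
            self.sem.release()  # V(): wake (at most) one waiter
```

The `if self.waiters > 0` check is what removes the semaphore's "history": a signal with no one waiting is a no-op, matching condition-variable semantics.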


C++ Language Support for Synchronization
• Languages with exceptions like C++
  – Languages that support exceptions are problematic (easy to make a non-local exit without releasing lock)
  – Consider:

    void Rtn() {
      lock.acquire();
      …
      DoFoo();
      …
      lock.release();
    }
    void DoFoo() {
      …
      if (exception) throw errException;
      …
    }

  – Notice that an exception in DoFoo() will exit without releasing the lock!

C++ Language Support for Synchronization (con't)
• Must catch all exceptions in critical sections
  – Catch exceptions, release lock, and re-throw exception:

    void Rtn() {
      lock.acquire();
      try {
        …
        DoFoo();
        …
      } catch (…) {       // catch exception
        lock.release();   // release lock
        throw;            // re-throw the exception
      }
      lock.release();
    }
    void DoFoo() {
      …
      if (exception) throw errException;
      …
    }

  – Even better: the unique_ptr facility (see the C++ spec)
    » Can deallocate/free lock regardless of exit method

Java Language Support for Synchronization
• Java has explicit support for threads and thread synchronization
• Bank Account example:

    class Account {
      private int balance;
      // object constructor
      public Account (int initialBalance) {
        balance = initialBalance;
      }
      public synchronized int getBalance() {
        return balance;
      }
      public synchronized void deposit(int amount) {
        balance += amount;
      }
    }

• Every object has an associated lock which gets automatically acquired and released on entry and exit from a synchronized method

Java Language Support for Synchronization (con't)
• Java also has synchronized statements:

    synchronized (object) {
      …
    }

• Since every Java object has an associated lock, this type of statement acquires and releases the object's lock on entry and exit of the body
  – Works properly even with exceptions:

    synchronized (object) {
      …
      DoFoo();
      …
    }
    void DoFoo() {
      throw errException;
    }
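The release-on-every-exit-path discipline these slides describe is exactly what scope-bound constructs (Java's synchronized, C++ RAII, Python's with statement) automate. A small sketch, where `do_foo` is a hypothetical helper that may throw, just like DoFoo() above:

```python
import threading

lock = threading.Lock()

def do_foo(fail):
    # Hypothetical helper standing in for DoFoo(): may raise mid-critical-section.
    if fail:
        raise RuntimeError("exception inside critical section")

def rtn(fail=False):
    # `with` releases the lock on BOTH normal exit and exception --
    # the moral equivalent of the catch / release / re-throw pattern.
    with lock:
        do_foo(fail)
        return "OK"
```

Without the `with` block (acquire/release written by hand), the failing path would leave the lock held forever, which is the bug the C++ slide warns about.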


Recall: Better Implementation of Locks by Disabling Interrupts
• Key idea: maintain a lock variable and impose mutual exclusion only during operations on that variable

    int mylock = FREE;
    Acquire(&mylock) – Wait until lock is free, then grab
    Release(&mylock) – Unlock, waking up anyone waiting

    Acquire(int *lock) {
      disable interrupts;
      if (*lock == BUSY) {
        put thread on wait queue;
        Go to sleep();
        // Enable interrupts?
      } else {
        *lock = BUSY;
      }
      enable interrupts;
    }

    Release(int *lock) {
      disable interrupts;
      if (anyone on wait queue) {
        take thread off wait queue
        Place on ready queue;
      } else {
        *lock = FREE;
      }
      enable interrupts;
    }

• Really only works in kernel – why?

Java Language Support for Synchronization (cont'd 2)
• In addition to a lock, every object has a single condition variable associated with it
  – How to wait inside a synchronized method or block:
    » void wait(long timeout);                  // Wait for timeout
    » void wait(long timeout, int nanoseconds); // variant
    » void wait();
  – How to signal in a synchronized method or block:
    » void notify();    // wakes up oldest waiter
    » void notifyAll(); // like broadcast, wakes everyone
  – Condition variables can wait for a bounded length of time. This is useful for handling exception cases:

    t1 = time.now();
    while (!ATMRequest()) {
      wait (CHECKPERIOD);
      t2 = time.now();
      if (t2 – t1 > LONG_TIME) checkMachine();
    }

  – Not all Java VMs equivalent!
    » Different scheduling policies, not necessarily preemptive!

In-Kernel Lock: Simulation
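The bounded-wait pattern on the Java slide (wait with a timeout, then run a health check if we have been waiting too long) looks like this in Python. `have_request` and `on_suspect_machine` are stand-ins for the slide's ATMRequest() and checkMachine(), and the timing constants are illustrative:

```python
import threading
import time

lock = threading.Lock()
request_ready = threading.Condition(lock)

CHECK_PERIOD = 0.05   # seconds; plays the role of CHECKPERIOD
LONG_TIME = 0.2       # seconds; plays the role of LONG_TIME

def wait_for_request(have_request, on_suspect_machine):
    """Wait for a request; run a health check if the wait drags on.

    Returns how many health checks fired before a request appeared.
    """
    checks = 0
    with lock:
        t1 = time.monotonic()
        while not have_request():
            request_ready.wait(timeout=CHECK_PERIOD)   # bounded wait
            t2 = time.monotonic()
            if t2 - t1 > LONG_TIME:
                on_suspect_machine()
                checks += 1
                t1 = t2            # restart the clock after a check
    return checks
```

The key point is that `wait(timeout=...)` returns after the timeout even if nobody signals, so the loop can interleave waiting with periodic checks.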

The next six slides (Lec 9.25-9.30) step through the code below for two threads, A and B, showing the lock value, the wait queue, the owner, and the ready queue at each point: A acquires the lock (value goes 0 → 1) and enters its critical section; B then calls Acquire(), finds value == 1, puts itself on the wait queue, and goes to sleep; when A calls Release(), it takes B off the wait queue and places it on the ready queue, so B eventually runs its critical section and releases in turn.

    int value = 0;

    Acquire() {
      disable interrupts;
      if (value == 1) {
        put thread on wait-queue;
        go to sleep() //??
      } else {
        value = 1;
      }
      enable interrupts;
    }

    Release() {
      disable interrupts;
      if anyone on wait queue {
        take thread off wait-queue
        Place on ready queue;
      } else {
        value = 0;
      }
      enable interrupts;
    }

    // Both Thread A and Thread B run the same pattern:
    lock.Acquire();
    …
    critical section;
    …
    lock.Release();

Discussion
• Notice that scheduling here involves deciding who to take off the wait queue
  – Could do by priority, etc.
• Same type of code works for in-kernel condition variables
  – The wait queue becomes unique for each condition variable
  – Once again, transitions to and from queues occur with interrupts disabled

BREAK

Synchronization Summary
• Semaphores: Like integers with restricted interface
  – Two operations:
    » P(): Wait if zero; decrement when becomes non-zero
    » V(): Increment and wake a sleeping task (if exists)
    » Can initialize value to any non-negative value
  – Use separate semaphore for each constraint
• Monitors: A lock plus zero or more condition variables
  – Always acquire lock before accessing shared data
  – Use condition variables to wait inside critical section
    » Three operations: Wait(), Signal(), Broadcast()

Recall: CPU Scheduling
• Earlier, we talked about the life-cycle of a thread
  – Active threads work their way from the ready queue to running to various waiting queues
• Question: How is the OS to decide which of several tasks to take off a queue?
  – Obvious queue to worry about is the ready queue
  – Others can be scheduled as well, however
• Scheduling: deciding which threads are given access to resources from moment to moment


Recall: Scheduling Assumptions
• CPU scheduling was a big area of research in the early 70's
• Many implicit assumptions for CPU scheduling:
  – One program per user
  – One thread per program
  – Programs are independent
• Clearly, these are unrealistic but they simplify the problem so it can be solved
  – For instance: is "fair" about fairness among users or programs?
    » If I run one compilation job and you run five, you get five times as much CPU on many operating systems
• The high-level goal: dole out CPU time to optimize some desired parameters of the system
  [Figure: timeline of CPU allocation – USER1, USER2, USER3, USER1, USER2 – over time]

Recall: Assumption – CPU Bursts
• Execution model: programs alternate between bursts of CPU and I/O
  – Program typically uses the CPU for some period of time, then does I/O, then uses the CPU again
  – Each scheduling decision is about which job to give the CPU for use by its next CPU burst
  – With timeslicing, a thread may be forced to give up the CPU before finishing its current CPU burst
  [Figure: histogram of CPU-burst durations, weighted toward small bursts]

Scheduling Policy Goals/Criteria
• Minimize response time
  – Minimize elapsed time to do an operation (or job)
  – Response time is what the user sees:
    » Time to echo a keystroke in editor
    » Time to compile a program
    » Real-time tasks: must meet deadlines imposed by the world
• Maximize throughput
  – Maximize operations (or jobs) per second
  – Throughput related to response time, but not identical:
    » Minimizing response time will lead to more context switching than if you only maximized throughput
  – Two parts to maximizing throughput:
    » Minimize overhead (for example, context-switching)
    » Efficient use of resources (CPU, disk, memory, etc.)
• Fairness
  – Share CPU among users in some equitable way
  – Fairness is not minimizing average response time:
    » Better average response time by making system less fair

First-Come, First-Served (FCFS) Scheduling
• First-Come, First-Served (FCFS)
  – Also "First In, First Out" (FIFO) or "Run until done"
    » In early systems, FCFS meant one program scheduled until done (including I/O)
    » Now, means keep CPU until thread blocks
• Example:

    Process   Burst Time
    P1        24
    P2        3
    P3        3

  – Suppose processes arrive in the order: P1, P2, P3. The Gantt chart for the schedule is:

    | P1                     | P2  | P3  |
    0                        24   27    30

  – Waiting time for P1 = 0; P2 = 24; P3 = 27
  – Average waiting time: (0 + 24 + 27)/3 = 17
  – Average completion time: (24 + 27 + 30)/3 = 27
• Convoy effect: short process behind long process
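The FCFS numbers above are easy to verify mechanically. A small sketch (all jobs arriving at time 0 and served in submission order, as the slide assumes):

```python
def fcfs(bursts):
    """Return (waiting_times, completion_times) for FCFS scheduling
    with all jobs arriving at time 0, served in list order."""
    waiting, completion, t = [], [], 0
    for burst in bursts:
        waiting.append(t)      # a job waits until everything before it finishes
        t += burst
        completion.append(t)
    return waiting, completion

# The slide's example: P1=24, P2=3, P3=3 submitted in that order
w, c = fcfs([24, 3, 3])
```

Running it reproduces the slide's averages (17 and 27); feeding in the reversed order [3, 3, 24] reproduces the next slide's 3 and 13, which is the convoy effect in miniature.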

FCFS Scheduling (Cont.)
• Example continued:
  – Suppose that processes arrive in the order: P2, P3, P1. Now, the Gantt chart for the schedule is:

    | P2  | P3  | P1                     |
    0     3     6                        30

  – Waiting time for P1 = 6; P2 = 0; P3 = 3
  – Average waiting time: (6 + 0 + 3)/3 = 3
  – Average completion time: (3 + 6 + 30)/3 = 13
• In second case:
  – Average waiting time is much better (before it was 17)
  – Average completion time is better (before it was 27)
• FIFO pros and cons:
  – Simple (+)
  – Short jobs get stuck behind long ones (-)
    » Safeway: getting milk, always stuck behind cart full of small items. Upside: get to read about space aliens!

Round Robin (RR)
• FCFS scheme: potentially bad for short jobs!
  – Depends on submit order
  – If you are first in line at supermarket with milk, you don't care who is behind you; on the other hand…
• Round Robin scheme
  – Each process gets a small unit of CPU time (time quantum), usually 10-100 milliseconds
  – After quantum expires, the process is preempted and added to the end of the ready queue
  – n processes in ready queue and time quantum is q ⇒
    » Each process gets 1/n of the CPU time
    » In chunks of at most q time units
    » No process waits more than (n-1)q time units
• Performance
  – q large ⇒ FCFS
  – q small ⇒ interleaved (really small ⇒ hyperthreading?)
  – q must be large with respect to context-switch time, otherwise overhead is too high (all overhead)

Example of RR with Time Quantum = 20
• Example:

    Process   Burst Time
    P1        53
    P2        8
    P3        68
    P4        24

  – The Gantt chart is:

    | P1 | P2 | P3 | P4 | P1 | P3 | P4 | P1 | P3 | P3 |
    0    20   28   48   68   88  108  112  125  145  153

  – Waiting time for P1 = (68-20) + (112-88) = 72
                     P2 = (20-0) = 20
                     P3 = (28-0) + (88-48) + (125-108) = 85
                     P4 = (48-0) + (108-68) = 88
  – Average waiting time = (72+20+85+88)/4 = 66¼
  – Average completion time = (125+28+153+112)/4 = 104½
• Thus, Round-Robin pros and cons:
  – Better for short jobs, fair (+)
  – Context-switching time adds up for long jobs (-)

Round-Robin Discussion
• How do you choose time slice?
  – What if too big?
    » Response time suffers
  – What if infinite (∞)?
    » Get back FIFO
  – What if time slice too small?
    » Throughput suffers!
• Actual choices of timeslice:
  – Initially, UNIX timeslice was one second:
    » Worked OK when UNIX was used by one or two people.
    » What if three compilations going on? 3 seconds to echo each keystroke!
  – In practice, need to balance short-job performance and long-job throughput:
    » Typical time slice today is between 10ms and 100ms
    » Typical context-switching overhead is 0.1ms to 1ms
    » Roughly 1% overhead due to context-switching
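The RR Gantt chart above can be reproduced with a few lines of simulation. A sketch (all jobs arriving at time 0, zero-cost context switches, preempted jobs going to the back of the ready queue, as the slides assume):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate Round Robin for jobs that all arrive at time 0, in
    list order. Returns (waiting_times, completion_times), where
    waiting time = completion time - burst time."""
    n = len(bursts)
    remaining = list(bursts)
    completion = [0] * n
    ready = deque(range(n))
    t = 0
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i])   # run one quantum (or less)
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)                # preempted: back of the queue
        else:
            completion[i] = t              # finished during this slice
    waiting = [completion[i] - bursts[i] for i in range(n)]
    return waiting, completion

# The slide's example: P1=53, P2=8, P3=68, P4=24 with quantum 20
w, c = round_robin([53, 8, 68, 24], 20)
```

Re-running it with quantum 1, 5, 8, or 10 reproduces the per-quantum comparison table two slides ahead.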

Comparisons between FCFS and Round Robin
• Assuming zero-cost context-switching time, is RR always better than FCFS?
• Simple example: 10 jobs, each takes 100s of CPU time; RR scheduler quantum of 1s; all jobs start at the same time
• Completion times:

    Job #   FIFO   RR
    1       100    991
    2       200    992
    …       …      …
    9       900    999
    10      1000   1000

  – Both RR and FCFS finish at the same time
  – Average response time is much worse under RR!
    » Bad when all jobs same length
• Also: cache state must be shared between all jobs with RR but can be devoted to each job with FIFO
  – Total time for RR longer even for zero-cost switch!

Earlier Example with Different Time Quantum
• Best FCFS order:

    | P2 [8] | P4 [24] | P1 [53] | P3 [68] |
    0        8         32        85       153

• Wait time:

    Quantum      P1    P2    P3    P4    Average
    Best FCFS    32    0     85    8     31¼
    Q = 1        84    22    85    57    62
    Q = 5        82    20    85    58    61¼
    Q = 8        80    8     85    56    57¼
    Q = 10       82    10    85    68    61¼
    Q = 20       72    20    85    88    66¼
    Worst FCFS   68    145   0     121   83½

• Completion time:

    Quantum      P1    P2    P3    P4    Average
    Best FCFS    85    8     153   32    69½
    Q = 1        137   30    153   81    100½
    Q = 5        135   28    153   82    99½
    Q = 8        133   16    153   80    95½
    Q = 10       135   18    153   92    99½
    Q = 20       125   28    153   112   104½
    Worst FCFS   121   153   68    145   121¾

Summary
• Semaphores: Like integers with restricted interface
  – Two operations:
    » P(): Wait if zero; decrement when becomes non-zero
    » V(): Increment and wake a sleeping task (if exists)
    » Can initialize value to any non-negative value
  – Use separate semaphore for each constraint
• Monitors: A lock plus one or more condition variables
  – Always acquire lock before accessing shared data
  – Use condition variables to wait inside critical section
    » Three operations: Wait(), Signal(), and Broadcast()
• Scheduling: selecting a waiting process from the ready queue and allocating the CPU to it
• FCFS Scheduling:
  – Run threads to completion in order of submission
  – Pros: Simple
  – Cons: Short jobs get stuck behind long ones
• Round-Robin Scheduling:
  – Give each thread a small amount of CPU time when it executes; cycle between all ready threads
  – Pros: Better for short jobs
  – Cons: Poor when jobs are same length
