Processes and Non-Preemptive Scheduling

Otto J. Anshus, University of Tromsø / University of Oslo
Tore Larsen, University of Tromsø
Kai Li, Princeton University

An aside on concurrency

• Timing and sequence of events are key concurrency issues
• We will study classical OS concurrency issues, including implementation and use of classic OS mechanisms to support concurrency
• In a later course on parallel programming you may revisit this material
• In a later course on distributed systems you may want to use formal tools to understand and model timing and sequencing better
• Single-CPU computers are designed to uphold a simple and rigid model of sequencing and timing. "Under the hood," even single-CPU systems are distributed in nature, and are carefully organized to uphold strict external requirements

Process

• An instance of a program under execution
  – Program specifying (logical) control-flow (thread)
  – Data
  – Private address space
  – Open files
  – Running environment
• The most important operating system concept
• Used for supporting the concurrent execution of independent or cooperating program instances
• Used to structure applications and systems

Supporting and Using Processes

• Multiprogramming
  – Supporting concurrent execution (parallel or transparently interleaved) of multiple processes (or threads)
  – Achieved by context switching: switching the CPU(s) back and forth among the individual processes, keeping track of each process' progress
• Concurrent programs
  – Programs (or threads) that exploit multiprogramming for some purpose (e.g. performance, structure)
  – Independent or cooperating
  – Operating systems are an important application area for concurrent programming. Many others (event-driven programs, servers, ++)


Implementing processes

• The OS needs to keep track of all processes
  – Keep track of each process' progress
  – Parent process
  – Metadata (priorities etc.) used by the OS
  – Memory management
  – File management
• Process table with one entry (Process Control Block) per process
• Will also align processes in queues

Primitives of Processes

• Creation and termination
  – fork, exec, wait, kill
• Signals (see the sketch below)
  – Action, Return, Handler
• Operations
  – block, yield
• Synchronization
  – We will talk about this later
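The Action/Return/Handler triple above maps onto the POSIX signal API. Below is a minimal sketch, assuming POSIX sigaction and SIGINT; the handler name on_sigint is only illustrative.

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

/* Handler: the action this process registers for SIGINT. */
static void on_sigint(int signo)
{
    /* Only async-signal-safe calls belong in a handler; write() is one. */
    const char msg[] = "caught SIGINT\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);
}

int main(void)
{
    struct sigaction sa = {0};
    sa.sa_handler = on_sigint;      /* action: run this handler            */
    sigemptyset(&sa.sa_mask);       /* no extra signals blocked in handler */
    sa.sa_flags = SA_RESTART;       /* restart interrupted system calls    */

    if (sigaction(SIGINT, &sa, NULL) == -1) {
        perror("sigaction");
        return 1;
    }

    for (;;)
        pause();                    /* block until a signal is delivered;
                                       execution resumes here after the
                                       handler returns                     */
}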


fork (UNIX)

• Spawns a new process (with a new PID)
• Called in the parent process
• Returns in both parent and child process
• Return value in the parent is the child's PID
• Return value in the child is '0'
• Child gets a duplicate, but separate, copy of the parent's user-level virtual address space
• Child gets an identical copy of the parent's open file descriptors

fork, exec, wait, kill

• Return value tested for error, zero, or positive (see the sketch below)
• Zero: this is the child process
  – Typically redirect standard files, and
  – Call exec to load a new program instead of the old
• Positive: this is the parent process
• wait: parent waits for the child's termination
  – wait before the corresponding exit: parent blocks until exit
  – exit before the corresponding wait: child becomes a zombie (un-dead) until wait
• kill: the specified process terminates
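A minimal sketch of the return-value test described above, assuming POSIX fork, execlp and waitpid; running ls in the child is an arbitrary choice.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();              /* duplicate the calling process */

    if (pid < 0) {                   /* error: no child was created */
        perror("fork");
        exit(1);
    } else if (pid == 0) {           /* zero: this is the child */
        /* Typically redirect standard files here, then load a new program. */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");            /* exec only returns on failure */
        _exit(127);
    } else {                         /* positive: this is the parent; pid is the child's PID */
        int status;
        waitpid(pid, &status, 0);    /* block until the child exits (avoids a lingering zombie) */
        printf("child %d exited with status %d\n",
               (int)pid, WEXITSTATUS(status));
    }
    return 0;
}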


When may the OS switch contexts?

• Only when the OS runs
• Events potentially causing a context switch:
  – Non-preemptive scheduling:
    • Process created (fork)
    • Process exits (exit)
    • Process blocks implicitly (I/O, IPC)
    • Process blocks explicitly (yield)
    • User- or system-level trap (HW; SW: user level; exception)
  – Preemptive scheduling:
    • Kernel preempts the current process
• Potential scheduling decision at "any of the above"
• + "Timer" to be able to limit the running time of processes (see the sketch below)

Context Switching Issues

• Performance
  – Should be no more than a few microseconds
  – Most time is spent SAVING and RESTORING the context of processes
• The less processor state to save, the better
  – The Pentium has a multitasking mechanism, but SW can be faster if it saves less of the state
• How to save time on the copying of context state?
  – Re-map (address) instead of copy (data)
• Where to store kernel data structures "shared" by all processes
  – Memory
• How to give processes a fair share of CPU time
  – Preemptive scheduling; a time-slice defines the maximum time interval between scheduling decisions
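The kernel gets its "timer" from a hardware timer interrupt. At user level the same idea can be sketched with POSIX setitimer and SIGALRM; this is only an analogy, not how the kernel itself is implemented, and the 10 ms slice is an arbitrary assumption.

#include <signal.h>
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

static volatile sig_atomic_t ticks;

/* Stand-in for the kernel's timer interrupt handler: each tick is a
 * point where a preemptive scheduler could take the CPU away.        */
static void on_tick(int signo)
{
    ticks++;
}

int main(void)
{
    struct sigaction sa = {0};
    sa.sa_handler = on_tick;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGALRM, &sa, NULL);

    /* Fire SIGALRM every 10 ms (initial delay and reload interval). */
    struct itimerval tv = { { 0, 10000 }, { 0, 10000 } };
    setitimer(ITIMER_REAL, &tv, NULL);

    for (;;) {                 /* the "user computation" being time-sliced */
        pause();
        if (ticks % 100 == 0)
            printf("tick %d\n", (int)ticks);
    }
}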


Example Process State Transitions

[Figure: MULTIPROGRAMMING on one machine. User-level processes P1–P4 run on top of the KERNEL, which holds the trap service handler, the scheduler, the dispatcher, the memory-resident PCBs, a ReadyQueue and a BlockedQueue. Uniprocessor: interleaving ("pseudoparallelism"); multiprocessor: overlapping ("true parallelism").]

Process State Transition of Non-Preemptive Scheduling

[Figure: state diagram with the states Running, Ready and Blocked (a sketch in code follows below). Transitions: Create a process → Ready; Scheduler dispatch: Ready → Running; Block for resource (call scheduler): Running → Blocked; Yield (call scheduler): Running → Ready; Terminate (call scheduler): Running → out of the system; Resource becomes available (move to ready queue): Blocked → Ready.]
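A small sketch of the states and transitions in the diagram above; the enum and event names are invented for illustration, not taken from any real kernel.

/* Illustrative only: the non-preemptive state diagram as code. */
enum proc_state { READY, RUNNING, BLOCKED };

enum event { CREATE, DISPATCH, YIELD, BLOCK, RESOURCE_AVAILABLE, TERMINATE };

/* Returns the next state, or -1 when the process leaves the system or
 * the transition is illegal in this simple model.                      */
static int next_state(enum proc_state s, enum event e)
{
    switch (e) {
    case CREATE:             return READY;                        /* new process            */
    case DISPATCH:           return s == READY   ? RUNNING : -1;  /* scheduler dispatch     */
    case YIELD:              return s == RUNNING ? READY   : -1;  /* yield (call scheduler) */
    case BLOCK:              return s == RUNNING ? BLOCKED : -1;  /* block for resource     */
    case RESOURCE_AVAILABLE: return s == BLOCKED ? READY   : -1;  /* move to ready queue    */
    case TERMINATE:          return -1;                           /* terminate              */
    }
    return -1;
}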

Stacks

• Remember: we have only one copy of the kernel in memory => all processes "execute" the same kernel code
  => Must have a kernel stack for each process
• Used for storing parameters, return addresses, and locally created variables in frames or activation records
• Each process has
  – a user stack
  – a kernel stack
    • always empty when the process is in user mode executing instructions

Scheduler

• A non-preemptive scheduler is invoked by explicit block or yield calls, possibly also by fork and exit
• The simplest form (a sketch in code follows below):

  Scheduler:
    save current process state (store to PCB)
    choose next process to run
    dispatch (load PCB and run)

• Does this work?
• The PCB (or something) must be resident in memory
• Remember the stacks
• Does the kernel need its own stack(s)?
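A sketch of the three-step scheduler above. The pcb fields, the ready-queue layout, and switch_context are hypothetical stand-ins; a real dispatch needs a few lines of architecture-specific assembly to save and restore registers, and this sketch assumes the ready queue is never empty.

/* Hypothetical PCB and queue; names are illustrative, not from a real kernel. */
struct pcb {
    void       *saved_sp;     /* kernel stack pointer saved at last switch */
    struct pcb *next;         /* link in the ready queue                   */
};

static struct pcb *current;       /* process that called into the scheduler */
static struct pcb *ready_head;    /* FIFO ready queue                       */

/* Assumed context-switch primitive (normally a few lines of assembly):
 * stores the caller's registers via *old_sp and resumes from new_sp.    */
extern void switch_context(void **old_sp, void *new_sp);

/* Non-preemptive scheduler: only ever entered because the current
 * process called block(), yield() or exit() (or fork() re-enqueued it). */
void scheduler(void)
{
    struct pcb *prev = current;

    current    = ready_head;              /* choose next process to run    */
    ready_head = ready_head->next;

    /* Save current process state (store to PCB), then dispatch:
     * load the chosen PCB's stack and run it.                             */
    switch_context(&prev->saved_sp, current->saved_sp);
}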


More on Scheduler

• Should the scheduler use a special stack?
  – Yes,
    • because a user process can overflow its stack, and it would require another stack to deal with stack overflow
    • because it makes it simpler to pop and push to rebuild a process's context
  – Must have a stack when booting...
• Should the scheduler simply be a "kernel process"?
  – You can view it that way because it has a stack, code and its own data structure
  – This process always runs when there is no user process
• "Idle" process
  – In the kernel or at user level?

Win NT Idle

• No runnable thread exists on the processor
  – Dispatch the Idle process (really a thread)
• Idle is really a dispatcher in the kernel
  – Enable interrupts; receive pending interrupts; disable interrupts;
  – Analyze interrupts; dispatch a thread if so needed;
  – Check for deferred work; dispatch a thread if so needed;
  – Perform power management;


Threads and Processes

[Figure comparing the project OpSys model and traditional threads: processes in individual address spaces, user-level threads and kernel threads within a process, and the kernel address space at kernel level.]

Where Should PCB Be Saved?

• Save the PCB on the user stack
  – Many processors have a special instruction to do it efficiently
  – But we need to deal with the overflow problem
  – When the process terminates, the PCB vanishes
• Save the PCB on a kernel heap data structure (see the sketch below)
  – May not be as efficient as saving it on the stack
  – But it is very flexible and has no other problems

Job swapping

• The processes competing for resources may have combined demands that result in poor system performance
• Reducing the degree of multiprogramming by moving some processes to disk, and temporarily not considering them for execution, may be a strategy to enhance overall system performance
  – From which state(s), to which state(s)? Try extending the following examples using two suspended states.
• The term is also used in a slightly different setting, see MOS Ch. 4.2, pp. 196-197
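A sketch of the second option above: keeping each PCB in a memory-resident kernel table rather than on the user stack. The struct fields and MAX_PROCS are illustrative assumptions, not taken from any particular kernel.

#include <stdint.h>

#define MAX_PROCS 64

/* Illustrative PCB kept in a kernel-resident table (the "kernel heap /
 * data structure" option), so it survives independently of any user
 * stack and is easy to locate from the process table.                 */
struct pcb {
    int      pid;
    int      state;            /* READY, RUNNING, BLOCKED, SWAPPED, ... */
    uint32_t saved_regs[16];   /* register context saved at last switch */
    void    *kernel_stack;     /* each process has its own kernel stack */
};

static struct pcb process_table[MAX_PROCS];   /* one entry per process */

static struct pcb *pcb_lookup(int pid)
{
    for (int i = 0; i < MAX_PROCS; i++)
        if (process_table[i].pid == pid)
            return &process_table[i];
    return 0;                                 /* no such process */
}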


Add Job Swapping to State Transition Diagram

[Figure: the earlier state diagram (Create a process → Ready; Scheduler dispatch: Ready → Running; Block for resource, Yield and Terminate, each calling the scheduler; Resource becomes available: Blocked → Ready) extended with Swap out and Swap in transitions to and from a Swapped state.]

Job Swapping, ii

[Figure: a queueing view. The Ready Queue feeds the CPU, which leads to Terminate; I/O waiting queues return processes to Ready when the I/O completes; partially executed swapped-out processes are swapped out to disk and later swapped in again.]


Concurrent Programming w/ Processes

• Clean programming model
  – File tables are shared
  – User address space is private
• Processes are protected from each other
• Sharing requires some sort of IPC (InterProcess Communication)
• Slower execution
  – Process switch and process control are expensive
  – IPC is expensive

I/O Multiplexing: More than one State Machine per Process

• select blocks for any of multiple events
• Handle (one of) the events that unblocked select (see the sketch below)
  – Advance the appropriate state machine
• Repeat
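A minimal sketch of the loop described above, assuming POSIX select. It multiplexes two descriptors, standard input and a pipe created only to provide a second event source, and keeps one trivial "state machine" (a byte counter) per descriptor.

#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

#define NFDS 2

/* One state machine per descriptor; here the "state" is just a byte count. */
struct sm { int fd; long bytes_seen; };

int main(void)
{
    int pipefd[2];
    pipe(pipefd);                         /* second event source for the demo */
    write(pipefd[1], "hello\n", 6);

    struct sm machines[NFDS] = { { STDIN_FILENO, 0 }, { pipefd[0], 0 } };

    for (;;) {
        fd_set rfds;
        int maxfd = -1;

        FD_ZERO(&rfds);                   /* register every machine's descriptor */
        for (int i = 0; i < NFDS; i++) {
            FD_SET(machines[i].fd, &rfds);
            if (machines[i].fd > maxfd)
                maxfd = machines[i].fd;
        }

        /* select blocks until any of the registered events occurs */
        if (select(maxfd + 1, &rfds, NULL, NULL, NULL) < 0) {
            perror("select");
            return 1;
        }

        /* handle the event(s) that unblocked select: advance that machine */
        for (int i = 0; i < NFDS; i++) {
            if (FD_ISSET(machines[i].fd, &rfds)) {
                char buf[256];
                ssize_t n = read(machines[i].fd, buf, sizeof buf);
                if (n > 0)
                    machines[i].bytes_seen += n;
                printf("fd %d: %ld bytes so far\n",
                       machines[i].fd, machines[i].bytes_seen);
            }
        }
    }
}

In a server the descriptors would typically be sockets, and poll or epoll would often replace select, but the structure of the loop is the same.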


Concurrent prog. w/ I/O Multiplexing

• Establishes several control flows (state machines) in one process
• Uses select
• Offers the application programmer more control than the process model (How?)
• Easy sharing of data among state machines
• More efficient (no process switch to switch between control flows in the same process)
• Difficult programming

