LAST LECTURE

Scheduling Algorithms:
➜ FIFO
➜ Shortest remaining job next
➜ Highest Response Rate Next (HRRN) — no starvation

Performance of round-robin:
➜ Average waiting time: not optimal
➜ Performance depends heavily on the size of the time quantum:
- too short: context-switch overhead becomes too expensive
- too large: degenerates to FCFS policy
- rule of thumb: about 80% of all bursts should be shorter than 1 time quantum

ROUND-ROBIN

➜ Scheduled thread is given a time slice
➜ Running thread is preempted upon clock interrupt and returned to the ready queue
(Figure: round-robin schedules of threads A–E with quantum=1 and quantum=4, time 0–20.)

PRIORITIES

➜ Each thread is associated with a priority
➜ Basic mechanism to influence scheduler decisions:
• Scheduler will always choose a thread of higher priority over one of lower priority
• Implemented via multiple FCFS ready queues (one per priority)
➜ Lower-priority threads may suffer starvation
• adapt priority based on thread's age or execution history
➜ Priorities can be defined internally or externally
- internal: e.g., memory requirements, I/O bound vs CPU bound
- external: e.g., importance of thread, importance of user
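The quantum trade-off can be exercised in a toy simulation. The burst times below are assumed for illustration (the slide's Gantt charts cover threads A–E over 20 time units, but the individual bursts are not recoverable from the text):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin for jobs that all arrive at t=0.
    Returns the average waiting time (completion minus burst length)."""
    remaining = dict(bursts)
    finish = {}
    queue = deque(remaining)
    t = 0
    while queue:
        job = queue.popleft()
        run = min(quantum, remaining[job])   # run one slice (or less)
        t += run
        remaining[job] -= run
        if remaining[job] == 0:
            finish[job] = t
        else:
            queue.append(job)                # preempted: back of the queue
    return sum(finish[j] - b for j, b in bursts.items()) / len(bursts)

bursts = {"A": 6, "B": 4, "C": 4, "D": 4, "E": 2}   # assumed burst times
print(round_robin(bursts, 1), round_robin(bursts, 4))
```

With a quantum far larger than every burst, the schedule degenerates to FCFS, exactly as the slide states.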

ROUND-ROBIN — FEEDBACK SCHEDULING

➜ Penalize jobs that have been running longer
➜ Implemented via multiple ready queues RQ0 … RQn: a thread is admitted to RQ0 and moves to a lower queue each time it is preempted
(Figure: Admit → RQ0 → Processor → Release; preempted threads re-enter at RQ1, …, RQn.)
➜ q = 2^i: longer time slice for lower priority (reduces starvation)
(Figure: feedback schedules of threads A–E with q = 1 and q = 2^i, time 0–20.)

PRIORITIES

Priority queueing:
(Figure: per-priority ready queues — queue headers pointing to runnable processes, from Priority 4 (highest) down to Priority 1 (lowest).)

Priorities influence access to resources, but do not guarantee a certain fraction of the resource (CPU etc.)!

LOTTERY SCHEDULING

➜ Each thread gets "lottery tickets" for various resources
➜ More lottery tickets imply better access to the resource

Advantages:
➜ Simple
➜ Highly responsive
➜ Allows cooperating processes/threads to implement individual scheduling policies (exchange of tickets)
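A minimal sketch of the multilevel-feedback mechanism described above. The demote-one-level-on-preemption policy and the q = 2^i slice are the parts taken from the slides; the job names and burst times are assumptions:

```python
from collections import deque

def feedback_schedule(bursts, levels=3):
    """Multilevel feedback with q = 2^i: a preempted job drops one
    level; queues are scanned highest-priority (lowest i) first.
    `bursts` maps job name -> CPU time; all jobs arrive at t=0.
    Returns the sequence of (job, slice length) actually run."""
    queues = [deque() for _ in range(levels)]
    for job in bursts:
        queues[0].append(job)            # everyone starts in RQ0
    remaining = dict(bursts)
    order = []
    while any(queues):
        lvl = next(i for i, q in enumerate(queues) if q)
        job = queues[lvl].popleft()
        run = min(2 ** lvl, remaining[job])   # q = 2^i at level i
        order.append((job, run))
        remaining[job] -= run
        if remaining[job] > 0:           # preempted: penalised one level
            queues[min(lvl + 1, levels - 1)].append(job)
    return order

print(feedback_schedule({"A": 4, "B": 1}))
```

The long job A is repeatedly demoted but receives progressively longer slices, which is how the scheme limits starvation.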

PRIORITIES

Note: UNIX traditionally uses a counter-intuitive priority representation (higher value = less priority).

Bands, in decreasing order of priority:
• Swapper
• Block I/O device control
• File manipulation
• Character I/O device control
• User processes

LOTTERY SCHEDULING

Example (taken from Embedded Systems Programming): four processes running concurrently
➜ Process A: 15% of CPU time
➜ Process B: 25% of CPU time
➜ Process C: 5% of CPU time
➜ Process D: 55% of CPU time

How many tickets should each process get to achieve this? Number of tickets in proportion to CPU time, e.g., if we have 20 tickets overall:
➜ Process A: 15% of tickets: 3
➜ Process B: 25% of tickets: 5
➜ Process C: 5% of tickets: 1
➜ Process D: 55% of tickets: 11
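The ticket allocation above can be exercised with a sketch of the lottery draw itself; the drawing loop and seed are illustrative, not from the slides:

```python
import random

# Ticket counts from the example above (20 tickets total).
tickets = {"A": 3, "B": 5, "C": 1, "D": 11}

def draw_winner(tickets, rng):
    """Pick the next process to run by drawing one ticket uniformly."""
    pool = [name for name, n in tickets.items() for _ in range(n)]
    return rng.choice(pool)

# Over many lotteries each process's share of wins approaches its
# ticket share, i.e. the intended CPU fraction.
rng = random.Random(0)
wins = {name: 0 for name in tickets}
for _ in range(100_000):
    wins[draw_winner(tickets, rng)] += 1
print({name: round(w / 100_000, 2) for name, w in wins.items()})
```

The shares converge to roughly 0.15 / 0.25 / 0.05 / 0.55, matching the target CPU fractions.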

TRADITIONAL UNIX SCHEDULING (SVR3, 4.3 BSD)

Objectives:
➜ support for time sharing
➜ good response time for interactive users
➜ support for low-priority background jobs

Strategy:
➜ Multilevel feedback using round robin within priorities
➜ Priorities are recomputed once per second
➜ Base priority divides all processes into fixed bands of priority levels
➜ Priority adjustment capped to keep processes within bands
➜ Favours I/O-bound over CPU-bound processes

Advantages:
➜ relatively simple, effective
➜ works well for single-processor systems

Disadvantages:
➜ significant overhead in large systems (recomputing priorities)
➜ response time not guaranteed
➜ non-preemptive kernel: a lower-priority process in kernel mode can delay a high-priority process

NON-PREEMPTIVE VS PREEMPTIVE KERNEL

➜ kernel data structures have to be protected
➜ basically, two ways to solve the problem:
• Non-preemptive: disable all (most) interrupts while in kernel mode, so no other thread can get into kernel mode while in a critical section
- Priority inversion possible
- Coarse-grained
- Works only for uniprocessors
• Preemptive: just lock the kernel data structure which is currently being modified
- More fine-grained
- Introduces additional overhead, can reduce throughput

MULTIPROCESSOR SCHEDULING

What kind of systems and applications are there?

Classification of Multiprocessor Systems:
(Figure: (a) tightly coupled shared-memory multiprocessor, (b) loosely coupled multiprocessor with interconnect, (c) distributed system connected via the Internet.)
(a) Tightly coupled multiprocessing
• Processors share main memory, controlled by a single operating system; called a symmetric multiprocessor (SMP) system
(b) Loosely coupled multiprocessor
• Each processor has its own memory and I/O channels
• Generally called a distributed-memory multiprocessor
(c) Distributed system
• complete computer systems connected via a wide-area network
• communicate via message passing

PARALLELISM

Independent parallelism:
➜ Separate applications/jobs
➜ No synchronization
➜ Parallelism improves throughput, responsiveness
➜ Parallelism doesn't affect execution time of (single-threaded) programs

Coarse and very coarse-grained parallelism:
➜ Synchronization among processes is infrequent
➜ Good for loosely coupled multiprocessors
• Can be ported to a multiprocessor with little change

Medium-grained parallelism:
➜ Parallel processing within a single application
• Application runs as a multithreaded process
➜ Threads usually interact frequently
➜ Good for SMP systems
➜ Unsuitable for loosely coupled systems

Fine-grained parallelism:
➜ Highly parallel applications
• e.g., parallel execution of loop iterations
➜ Very frequent synchronisation
➜ Works well only on special hardware
• vector computers, symmetric multithreading (SMT) hardware

ASSIGNMENT OF THREADS TO PROCESSORS

➜ Treat processors as a pooled resource and assign threads to processors on demand:
• Permanently assign threads to a processor
- Dedicated short-term queue for each processor
✔ Low overhead
✖ Processor could be idle while another processor has a backlog
• Dynamically assign threads to a processor
✖ higher overhead
✖ poor locality
✔ better load balancing

MULTIPROCESSOR SCHEDULING

Which process should be run next, and where?

We discuss:
➜ Tightly coupled multiprocessing
➜ Very coarse to medium-grained parallelism
➜ Homogeneous systems (all processors have the same specs and access to devices)

Design Issues:
➜ How to assign processes/threads to the available processors?
➜ Multiprogramming on individual processors?
➜ Which scheduling strategy?
➜ Scheduling of dependent processes

ASSIGNMENT OF THREADS TO PROCESSORS

Who decides which thread runs on which processor?

Master/slave architecture:
➜ Key kernel functions always run on a particular processor
➜ Master is responsible for scheduling
➜ Slave sends service requests to the master
✔ simple
✔ one processor has control of all resources, no synchronisation required
✖ Failure of the master brings down the whole system
✖ Master can become a performance bottleneck

ASSIGNMENT OF THREADS TO PROCESSORS

Peer architecture:
➜ Operating system can execute on any processor
➜ Each processor does self-scheduling
➜ Complicates the operating system
- Make sure no two processors schedule the same thread
- Synchronise access to resources
➜ Proper symmetric multiprocessing
➜ No centralized scheduler required

LOAD SHARING: TIME SHARING

➜ Load is distributed evenly across the processors
➜ Use a global ready queue
• Threads are not assigned to a particular processor
• Scheduler picks any ready thread (according to scheduling policy)
• Actual scheduling policy less important than on a uniprocessor
(Figure: 16 CPUs sharing a global priority ready queue of threads A–N; each CPU that goes idle picks the highest-priority ready thread.)

Disadvantages of time sharing:
➜ Central queue needs mutual exclusion
• Potential race condition when several CPUs are trying to pick a thread from the ready queue
• May be a bottleneck blocking processors
➜ Preempted threads are unlikely to resume execution on the same processor
• Cache use is less efficient, bad locality
➜ Different threads of the same process unlikely to execute in parallel
• Potentially high intra-process communication latency


GANG SCHEDULING

Combined time and space sharing:
➜ Simultaneous scheduling of all threads that make up a single process
➜ Useful for applications where performance severely degrades when any part of the application is not running
• e.g., threads often need to synchronise with each other
(Figure: thread A0 on CPU 0 and thread A1 on CPU 1 exchange requests and replies; when the two are not scheduled together, each message waits for the partner's next time slice.)
(Figure: gang scheduling on 6 CPUs over 8 time slots — slot 0: A0–A5; slot 1: B0–B2 and C0–C2; slot 2: D0–D4 and E0; slot 3: E1–E6; slots 4–7 repeat slots 0–3.)

SMP SUPPORT IN MODERN GENERAL-PURPOSE OSs (source: http://www.2cpu.com)

➜ Solaris 8.0: up to 128 CPUs
➜ Linux 2.4: up to 32
➜ Windows 2000 Data Center: up to 32 CPUs
➜ OS/2 Warp: up to 64

SMP Scheduling in Linux 2.4:
➜ tries to schedule a process on the same CPU it last ran on
➜ if that CPU is busy, assigns it to an idle CPU
➜ otherwise, checks if the process priority allows an interrupt on the preferred CPU
➜ uses spin locks to protect kernel data structures

LOAD SHARING: SPACE SHARING

Scheduling multiple threads of the same process across multiple CPUs
(Figure: 32 CPUs divided into an 8-CPU, a 4-CPU, a 6-CPU and a 12-CPU partition, with the remaining CPUs unassigned.)
➜ statically assigned to CPUs at creation time (figure), or
➜ dynamic assignment using a central server

WINDOWS 2000 SCHEDULING

➜ priority-driven, preemptive scheduling system
➜ if a thread with higher priority becomes ready to run, the current thread is preempted
➜ scheduled at thread granularity
➜ priorities: 0 (zero-page thread), 1–15 (variable levels), 16–31 (real-time levels — soft)
➜ each thread has a quantum value; the clock-interrupt handler deducts 3 from the running thread's quantum
➜ default quantum value: 6 on Windows 2000 Professional, 36 on Windows 2000 Server
➜ most wait operations result in a temporary priority boost, favouring I/O-bound threads

REAL-TIME SYSTEMS

What is a real-time system?
A real-time system is a system whose correctness includes its response time as well as its functional correctness.

What is a hard real-time system?
A real-time system with guaranteed worst-case response times.
➜ Hard real-time systems fail if deadlines cannot be met
➜ Service of soft real-time systems degrades if deadlines cannot be met

REAL-TIME SYSTEMS

Overview:
➜ Real-time systems
- Hard and soft real-time systems
- Real-time scheduling
- A closer look at some real-time operating systems

Real-time systems in practice:
➜ no clear separation
➜ a system may meet the hard deadlines of one application, but not of another
➜ depending on the application, the time scale may vary from microseconds to seconds
➜ most systems have some real-time requirements

CHARACTERISTICS OF REAL-TIME OPERATING SYSTEMS

Deterministic: How long does it take to acknowledge an interrupt?
➜ Operations are performed at fixed, predetermined times or within predetermined time intervals
➜ Depends on
- response time of the system for interrupts
- capacity of the system
➜ Cannot be fully deterministic when processes are competing for resources
➜ Requires a preemptive kernel

Responsive: How long does it take to service the interrupt?
➜ Includes the amount of time to begin execution of the interrupt handler
➜ Includes the amount of time to perform the interrupt service

Soft Real-time Applications:
➜ Many multimedia apps
➜ e.g., DVD or MP3 players
➜ Many real-time games, networked games

Hard Real-time Applications:
➜ Control of laboratory experiments
➜ Embedded devices
➜ Process control plants
➜ Robotics
➜ Air traffic control
➜ Telecommunications
➜ Military command and control systems

CHARACTERISTICS OF REAL-TIME OPERATING SYSTEMS

User control: the user has much more control compared to ordinary OSs
➜ User specifies priorities
➜ Specifies paging behaviour
➜ Which processes must always reside in main memory
➜ Which disk algorithms to use
➜ Rights of processes

Reliability: failure, loss, or degradation of performance may have catastrophic consequences
➜ Attempt either to correct the problem or minimize its effects while continuing to run
➜ Most critical, high-priority tasks execute

Hard real-time systems:
➜ often lack the full functionality of a modern OS
➜ secondary memory usually limited or missing
➜ data stored in short-term or read-only memory
➜ no time sharing

Modern operating systems provide support for soft real-time applications.
A hard real-time OS is either a specially tailored OS, a modular system, or a customized version of a general-purpose OS.

CHARACTERISTICSOFREAL-TIMEOPERATINGSYSTEMS 17 CHARACTERISTICSOFREAL-TIMEOPERATINGSYSTEMS 18 REAL-TIME SCHEDULING CHARACTERISTICS OF REAL-TIMEOPERATINGSYSTEMS Preemptive round-robin: General purpose OS objectives like Request from a ➜ speed real-time process Real-time process added to Slide 36 ➜ fairness Slide 38 run queue to await its next slice ➜ maximising throughput Real-time ➜ minimising average response time Process 1 Process 2 Process n process Clock are not priorities in real time OS’s! tick Scheduling time

Features of real-time operating systems:
➜ Fast context switch
➜ Small size
➜ Ability to respond to external interrupts quickly
➜ Predictability of system performance!
➜ Use of special sequential files that can accumulate data at a fast rate
➜ Preemptive scheduling based on priority
➜ Minimization of intervals during which interrupts are disabled
➜ Ability to delay tasks for a fixed amount of time

REAL-TIME SCHEDULING

Non-preemptive priority:
(Figure: a request from a real-time process is added to the head of the run queue; the real-time process runs as soon as the current process blocks or completes.)

REAL-TIME SCHEDULING

Classes of Algorithms:
➜ Static table-driven
- suitable for periodic tasks
- input: periodic arrival, ending, and execution times
- output: schedule that allows all processes to meet requirements (if at all possible)
- determines at which points in time a task begins execution
➜ Static priority-driven preemptive
- static analysis determines priorities
- a traditional priority-driven scheduler is used
➜ Dynamic planning-based
- feasibility of integrating a new task is determined dynamically
➜ Dynamic best effort
- no feasibility analysis
- typically aperiodic, no static analysis possible
- does its best; processes that missed their deadline are aborted

Preemption points:
(Figure: a request from a real-time process waits for the next preemption point of the current process; at that point the real-time process is scheduled.)

When are periodic events schedulable?

➜ P_i: period with which event i occurs
➜ C_i: CPU time required to handle event i

A set of events e_1 to e_m is schedulable if

    C_1/P_1 + C_2/P_2 + … + C_m/P_m ≤ 1

Example:
➜ three periodic events with periods of 100, 200, and 500 ms
➜ requiring 50, 30, and 100 ms of CPU time respectively
➜ schedulable? 50/100 + 30/200 + 100/500 = 0.5 + 0.15 + 0.2 = 0.85 ≤ 1 ✔

REAL-TIME SCHEDULING

Immediate preemptive:
(Figure: a request from a real-time process preempts the current process; the real-time process executes immediately, so scheduling time is minimal.)
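The utilization test translates directly into code; for preemptive deadline-based scheduling of independent periodic events, total utilization at most 1 is the schedulability condition:

```python
def schedulable(events):
    """Utilization test: a set of periodic events is schedulable
    if the total CPU utilization sum(C_i / P_i) is at most 1.
    `events` is a list of (C, P) pairs in the same time unit."""
    return sum(c / p for c, p in events) <= 1

# The example from the slide: periods 100, 200, 500 ms,
# CPU demands 50, 30, 100 ms -> utilization 0.85.
events = [(50, 100), (30, 200), (100, 500)]
print(round(sum(c / p for c, p in events), 2))   # 0.85
print(schedulable(events))                       # True
```

An overloaded set, e.g. two events each needing 60 ms every 100 ms, fails the test (utilization 1.2).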

DEADLINE SCHEDULING

Current systems often try to provide real-time support by
➜ starting real-time tasks as quickly as possible
➜ speeding up interrupt handling and task dispatching

Not necessarily appropriate, since
➜ real-time applications are not concerned with speed but with reliably completing tasks
➜ priorities alone are not sufficient

The earliest-deadline-first strategy is provably optimal. It
➜ minimises the number of tasks that miss their deadline
➜ finds a schedule whenever one exists for a set of tasks

Earliest deadline first
➜ can be used for dynamic or static scheduling
➜ works with starting or completion deadlines
➜ for any given preemption strategy
- starting deadlines given: nonpreemptive
- completion deadlines given: preemptive

Additional information used:
➜ Ready time
- sequence of times for periodic tasks, may or may not be known statically
➜ Starting deadline
➜ Completion deadline
➜ Processing time
- may or may not be known, approximated
➜ Resource requirements
➜ Priority
➜ Subtask scheduler

Two tasks:
➜ Sensor A: data arrives every 20 ms, processing takes 10 ms
➜ Sensor B: data arrives every 50 ms, processing takes 25 ms

Scheduling decision every 10 ms:

  Task   Arrival Time   Execution Time   Deadline
  A(1)        0              10             20
  A(2)       20              10             40
  A(3)       40              10             60
  ...
  B(1)        0              25             50
  B(2)       50              25            100
  ...
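The sensor workload above can be fed through a small preemptive earliest-deadline-first simulator. This is a sketch: the 10 ms decision granularity matches the slide, but the tie-breaking rule (earlier arrival wins on equal deadlines) is an assumption:

```python
import heapq

def edf(tasks, horizon, tick=10):
    """Preemptive EDF over discrete decision points every `tick` ms.
    `tasks` maps a name to (period, exec_time); jobs arrive at
    multiples of the period with deadline = arrival + period.
    Returns the job chosen at each decision point."""
    ready = []       # heap of [deadline, arrival, name, remaining]
    schedule = []
    for t in range(0, horizon, tick):
        for name, (period, exc) in tasks.items():
            if t % period == 0:
                heapq.heappush(ready, [t + period, t, name, exc])
        if ready:
            job = ready[0]               # earliest deadline first
            schedule.append(job[2])
            job[3] -= tick
            if job[3] <= 0:
                heapq.heappop(ready)     # job finished
        else:
            schedule.append(None)
    return schedule

# Sensor A: every 20 ms, 10 ms work; Sensor B: every 50 ms, 25 ms work.
print(edf({"A": (20, 10), "B": (50, 25)}, horizon=100))
```

Both sensors meet every deadline over the first 100 ms, consistent with the utilization 10/20 + 25/50 = 1.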

Periodic threads with completion deadlines (sensors A and B from above):
(Figure: arrival times, execution times, and deadlines of A1–A5 and B1, B2 over 0–100 ms, with the resulting schedules.)
➜ Fixed-priority scheduling, A has priority: A1 A2 B1 A3 A4 A5 — B2 misses its deadline
➜ Fixed-priority scheduling, B has priority: B1 A2 A3 B2 A5 — A1 and A4 miss their deadlines
➜ Earliest-deadline scheduling using completion deadlines: A1 B1 A2 B1 A3 B2 A4 B2 A5 — all deadlines met

RATE MONOTONIC SCHEDULING

Works for processes which
➜ are periodic
➜ need the same amount of CPU time on each burst

Optimal static scheduling algorithm:
➜ guaranteed to succeed if

    C_1/P_1 + … + C_m/P_m ≤ m (2^(1/m) − 1)

➜ for m = 1, 10, 100, 1000 the bound is approximately 1, 0.72, 0.70, 0.693 (the limit is ln 2 ≈ 0.693)

Works by
➜ assigning priorities to threads on the basis of their periods
➜ the highest-priority task is the one with the shortest period
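The bound can be tabulated directly:

```python
from math import log

def rms_bound(m: int) -> float:
    """Worst-case utilization bound for rate monotonic scheduling
    of m periodic tasks: m * (2^(1/m) - 1)."""
    return m * (2 ** (1 / m) - 1)

for m in (1, 10, 100, 1000):
    print(m, round(rms_bound(m), 3))   # 1.0, 0.718, 0.696, 0.693
print(round(log(2), 3))                # limit as m -> infinity: ln 2
```

A task set whose total utilization stays under roughly 69% is therefore RMS-schedulable regardless of how many tasks it contains.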

Aperiodic threads with starting deadlines:
(Figure: five aperiodic threads A–E with arrival times, requirements, and starting deadlines over 0–120; ordered by starting deadline the threads are B, C, E, D, A.)
➜ Earliest deadline: service order A C E D — B misses its starting deadline
➜ Earliest deadline with unforced idle times: service order B C E D A — all deadlines met
➜ First-come-first-served (FCFS): service order A C D — B and E miss their starting deadlines

RATE MONOTONIC SCHEDULING

Periodic task timing diagram:
(Figure: in each cycle, task P runs for its execution time C and then idles until the end of its period T; the pattern repeats every cycle.)

Task set with RMS:
(Figure: with RMS, priority grows with rate (Hz) — the highest-rate task gets the highest priority, the lowest-rate task the lowest priority.)

WHY USE RMS?

Despite some obvious disadvantages of RMS over EDF, RMS is sometimes used:
➜ it has a lower overhead
➜ simple
➜ in practice, performance is similar
➜ greater stability, predictability

Example — A: 15/30, B: 15/40, C: 5/50 (execution time/period, in ms):
(Figure: over 0–140 ms, RMS falls behind and misses a deadline, while EDF — schedule A1 B1 A2 B2 A3 B3 A4 A5 B4 with C1–C3 fitted in — meets all deadlines.)

LINUX 2.4 SCHEDULING — SOFT REAL-TIME SUPPORT

➜ User assigns a static priority to real-time processes (1–99), never changed by the scheduler
➜ Conventional processes have a dynamic priority, always lower than real-time processes
- sum of the base priority and
- the number of clock ticks left of the quantum for the current epoch
➜ Scheduling classes
• SCHED_FIFO: first-in-first-out real-time threads
• SCHED_RR: round-robin real-time threads
• SCHED_OTHER: other, non-real-time threads
➜ Within each class multiple priorities may be used
➜ Deadlines cannot be specified, no guarantees given
➜ Due to the non-preemptive kernel, latency can be too high for real-time systems
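Checking the A: 15/30, B: 15/40, C: 5/50 task set against both utilization tests shows why RMS fails where EDF succeeds:

```python
def rms_bound(m):
    """RMS guarantee bound m*(2^(1/m)-1): utilization at or below
    this is guaranteed schedulable under RMS; above it, RMS may
    still work but is not guaranteed to."""
    return m * (2 ** (1 / m) - 1)

# Task set from the example above: (execution time, period) in ms.
tasks = [(15, 30), (15, 40), (5, 50)]
u = sum(c / p for c, p in tasks)
print(round(u, 3))                 # 0.975
print(u <= rms_bound(len(tasks)))  # False: no RMS guarantee (bound ~0.78)
print(u <= 1)                      # True: EDF can schedule it
```

Utilization 0.975 sits above the 3-task RMS bound of about 0.78 but below 1, exactly the regime where EDF meets all deadlines and RMS does not.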

Linux scheduling:
(Figure: four threads with relative priorities — A minimum, B middle, C middle, D maximum. (b) With FIFO scheduling the flow is D, B, C, A; (c) with RR scheduling it is D, B, C, B, C, A — the equal-priority threads B and C alternate.)

UNIX priority bands:
(Figure: priorities from highest to lowest — kernel-mode waits: −4 waiting for disk I/O, −3 waiting for disk buffer, −2 waiting for terminal input, −1 waiting for terminal output, 0 waiting for child to exit; then user-mode priorities 0–3, with a process queued on priority level 3 shown.)

WINDOWS 2000 SCHEDULING

➜ Priorities organized into two bands or classes
• Real-time
• Variable
➜ Priority-driven preemptive scheduler
➜ also, no deadlines, no guarantees

UNIX SVR4 SCHEDULING

➜ Highest preference to real-time processes
➜ Next-highest to kernel-mode processes
➜ Lowest preference to other user-mode processes
➜ Real-time processes may block system services

WINDOWS 2000 SCHEDULING

(Figure: the priority levels used to pick the next thread to run — system priorities 31 down to 16, user priorities 15 down to 1, the zero-page thread at 1, and the idle thread at 0.)
(Figure: how a thread's dynamic priority is derived — the process's priority class sets a base, the thread's base priority is highest/above normal/normal/below normal/lowest relative to it, and the thread's dynamic priority may be boosted above that base.)

Problem:
➜ in real-life applications, many tasks are not always periodic
➜ static priorities may not work

If real-time threads run periodically with the same length, fixed priority is no problem:
(Figure: periodic real-time thread a at the highest priority and periodic real-time thread b interleave cleanly, with various low-priority tasks (e.g., user I/O) filling the gaps.)

But if the frequency of the high-priority task increases temporarily, the system may encounter overload:
- system not able to respond
- system may not be able to perform the requested service

Example: network interface control driver; requirements:
➜ avoid dropping packets if possible
➜ definitely avoid overload

If the receiver thread permanently has the highest priority, the system may go into overload once the incoming rate exceeds a certain value.
➜ expected frequency: a packet every 64µs delivered to the thread
➜ CPU time required to process a packet: 25µs
➜ 32-entry ring buffer, at most 50% full
(Figure: packets arriving every 64µs feed the receiver thread, which needs 25µs per packet.)

Basic Idea: "simulation" of periodic behaviour of a thread by assigning
➜ real-time priority: Pr
➜ background priority: Pb
➜ execution budget: E
➜ replenishment interval: R

➜ Whenever the thread exhausts its execution budget, its priority is set to the background priority Pb
➜ When the thread blocks after n units, n will be added to the execution budget R units after execution started
➜ When the execution budget is incremented, the thread's priority is reset to Pr

SPORADIC SCHEDULING

POSIX standard to handle
➜ aperiodic or sporadic events
➜ with a static-priority, preemptive scheduler

Implemented in hard real-time systems such as QNX, some real-time versions of Linux, and (partially) the Real-Time Specification for Java (RTSJ).

Can be used to avoid overloading a system.

Example: execution budget 5, replenishment interval 13.

Thread does not block:
(Figure: the thread runs at real-time priority for 5 units, exhausts its budget, and drops to background priority; at time 13 the budget is replenished to 5 and the pattern repeats.)

Thread blocks:
(Figure: budget 5, replenishment interval 13; execution with blocking, time 0–20.)
(0) execution starts, 1st replenishment interval starts
(3) thread blocks
(5) continues execution, 2nd replenishment interval starts
(7) budget exhausted
(13) budget set to 3, thread continues execution
(16) budget exhausted
(18) budget set to 2
(19) thread continues execution

HARD REAL-TIME OS

We look at examples of two types of systems:
➜ configurable hard real-time systems
• system designed as a real-time OS from the start
➜ hard real-time variants of general-purpose OSs
• try to alleviate shortcomings of the OS with respect to real-time apps

Example: network interface control driver
➜ use the expected incoming rate and desired maximum CPU utilisation of the thread to compute the execution budget and replenishment period
➜ if no other threads wait for execution, packets can be processed even if the load is higher
➜ otherwise, packets may be dropped
➜ period: 64µs * 16 = 1024µs
➜ execution budget: 25µs * 16 = 400µs
➜ CPU load caused by the receiver thread: 400/1024 ≈ 0.39, about 39%

REAL-TIME SUPPORT IN LINUX

➜ Scheduling:
- POSIX SCHED_FIFO, SCHED_RR
- ongoing efforts to improve scheduler efficiency
➜ Virtual memory:
- no VM for real-time apps
- mlock() and mlockall() to switch off paging
➜ Timer: resolution 10ms, too coarse-grained for real-time apps
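The budget computation above, written as a small function; the batch factor of 16 is taken from the example, while the function and parameter names are ours:

```python
def sporadic_params(arrival_period_us, cost_us, batch=16):
    """Sketch: derive a sporadic-server execution budget and
    replenishment period from an expected inter-arrival time and
    per-event CPU cost, batching `batch` events per interval.
    Returns (budget, period, resulting CPU load)."""
    period = arrival_period_us * batch   # 64 * 16 = 1024 us
    budget = cost_us * batch             # 25 * 16 = 400 us
    return budget, period, budget / period

print(sporadic_params(64, 25))   # (400, 1024, 0.390625)
```

Capping the budget at 400µs per 1024µs bounds the receiver thread at about 39% of the CPU, so a packet flood can no longer starve the rest of the system.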

HIGH KERNEL LATENCY IN LINUX

Possible solutions:
➜ Low-Latency Linux
- a thread in kernel mode yields the CPU
- reduces the size of non-preemptable sections
- used in some real-time variants of Linux
➜ Preemptable Linux
- kernel data protected using mutexes/spinlocks
➜ Lock-breaking Preemptable Linux
- combination of the previous two approaches

QNX

➜ Microkernel-based architecture
➜ POSIX-standard API
➜ Modular — can be customised for very small size (e.g., embedded systems) or large systems
➜ Memory protection for user applications and OS components

Scheduling:
➜ FIFO scheduling
➜ Round-robin
➜ Adaptive scheduling
- if a thread consumes its timeslice, its priority is reduced by one
- if a thread blocks, it immediately comes back to its base priority
➜ POSIX sporadic scheduling

RTLINUX

➜ abstract machine layer between the actual hardware and the Linux kernel
➜ takes control of
- hardware interrupts
- timer hardware
- the interrupt disable mechanism
➜ real-time scheduler runs with no interference from the Linux kernel
➜ programmer must use the RTLinux API for real-time applications

WINDOWS CE 3.0

Componentised OS designed for embedded systems with hard real-time support
➜ handles nested interrupts
➜ handles priority inversion based on priority inheritance

Offers
➜ guaranteed upper bound on high-priority thread scheduling delay
➜ guaranteed upper bound on delay for interrupt service routines

WINDOWS 2000 SCHEDULING

(Figure: the two priority bands — real-time priority classes from highest (31) down to 16, and variable priority classes from 15 down to lowest (0).)

UNIX SVR4 SCHEDULING

SVR4 dispatch queues:
(Figure: dqactmap, a bit vector marking which queues are non-empty, and dispq, an array of 160 dispatch queues (priorities 159 down to 0), each holding the runnable processes of that priority.)