
EEM202A/CSM213A - Fall 2011
Lecture #5: The Rate-Monotonic Theory for Real-time Scheduling
Mani Srivastava
[email protected]
Networked & Embedded Systems Lab, UCLA
Copyright (c) 2011

Reading List for This Lecture

• MANDATORY READING
‣ L. Sha, R. Rajkumar, S.S. Sathaye. Generalized rate-monotonic scheduling theory: a framework for developing real-time systems. Proceedings of the IEEE, vol. 82, no. 1, pp. 68-82, Jan. 1994.
- http://ieeexplore.ieee.org/iel1/5/6554/00259427.pdf?arnumber=259427
‣ L. Sha, R. Rajkumar, and J.P. Lehoczky. Priority inheritance protocols: an approach to real-time synchronization. IEEE Transactions on Computers, vol. 39, no. 9, pp. 1175-1185, Sept. 1990.
- http://beru.univ-brest.fr/~singhoff/cheddar/publications/sha90.pdf
• RECOMMENDED READING
‣ C.L. Liu and J.W. Layland. Scheduling algorithms for multiprogramming in a hard-real-time environment. Journal of the Association for Computing Machinery, vol. 20, no. 1, pp. 46-61, Jan. 1973.
- http://citeseer.ist.psu.edu/liu73scheduling.html
‣ J. Lehoczky, L. Sha, and Y. Ding. The rate monotonic scheduling algorithm: exact characterization and average case behavior. Proceedings of the Real-Time Systems Symposium, pp. 166-171, Dec. 1989.
‣ E. Bini, G. Buttazzo, and G. Buttazzo. Rate monotonic analysis: the hyperbolic bound. IEEE Transactions on Computers, vol. 52, no. 7, pp. 933-942, July 2003.
- http://feanor.sssup.it/~giorgio/paps/2003/ieeetc-hb.pdf

Scheduling Tasks with Timing Constraints

• Allocate processor time to tasks so that critical timing constraints are met
‣ Tasks have an associated terminating function that needs to be executed
• Variants of the scheduling problem
‣ Statically known vs. dynamic task set
‣ Single vs. multiple processors
‣ One-shot vs. repetitive
- In the repetitive case, multiple instances (or requests) of a task arrive
‣ Sporadic vs. periodic
‣ Single-rate vs. multi-rate periodic tasks
‣ Stateless vs. stateful tasks
‣ Dependent vs. independent tasks
‣ Known vs. unknown arrival times
• This lecture: static priority based preemptive scheduling
‣ Single processor
‣ Known number of independent but stateful tasks, each with infinitely many instances
‣ Arrival times of task instances are unknown, but each task is periodic with its own rate

Computation & Timing Model of the System

• Requests for tasks for which hard deadlines exist are periodic, with constant inter-request intervals but otherwise unknown request arrival times
• Run-time for each task is constant for that task and does not vary with time
‣ can be interpreted as the maximum running time
• Deadlines consist of runnability constraints only
‣ each task must finish before the next request for it
‣ eliminates the need for buffering to queue tasks
• Tasks are independent
‣ requests for a certain task do not depend on the initiation or completion of requests for other tasks
‣ however, their periods may be related
• Other implicit assumptions
‣ No task can implicitly suspend itself, e.g. for I/O
‣ All tasks are fully preemptible
‣ All kernel overheads are zero

Formally Characterizing the Task Set

• Set Γ of n independent tasks τ1, τ2, …, τn
• Request periods are T1, T2, …, Tn
‣ Request rate of τi is 1/Ti
‣ τi,j denotes the j-th instance of the i-th task (∀ j = 1, 2, …)
• Request phases R1, R2, …, Rn are unknown
‣ Ri is the arrival time of τi,1
‣ The task set is called concrete if all Ri are known, non-concrete otherwise
‣ Here we restrict attention to non-concrete task sets
• Run-times are C1, C2, …, Cn
‣ These are worst-case execution times
• Relative deadlines are D1, D2, …, Dn
‣ Special case: Di == Ti
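To make the notation above concrete, here is a minimal sketch (Python, not from the slides) of a task record carrying Ti, Ci, and Di, together with the per-task utilization Ui = Ci/Ti used later in the lecture. The class and field names are illustrative assumptions, not any particular RTOS API; the two task values happen to match the example used later in the lecture.

```python
# Minimal sketch of the task-set notation (illustrative names, not an RTOS API).
from dataclasses import dataclass
from typing import Optional

@dataclass
class PeriodicTask:
    name: str
    T: float                   # request period T_i
    C: float                   # worst-case execution time C_i
    D: Optional[float] = None  # relative deadline D_i

    def __post_init__(self):
        if self.D is None:
            self.D = self.T    # special case D_i == T_i

    @property
    def U(self) -> float:
        return self.C / self.T  # per-task utilization U_i = C_i / T_i

# Hypothetical two-task set; total requested utilization is the sum of the U_i
gamma = [PeriodicTask("tau1", T=50, C=25), PeriodicTask("tau2", T=100, C=40)]
print(sum(task.U for task in gamma))   # 0.9, i.e., U = 90%
```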
Scheduling Algorithm

• Set of rules to determine the task to be executed at a particular moment
• One possibility: preemptive & priority driven
‣ Tasks are assigned priorities
- Static or fixed approach - priorities are assigned to tasks once and for all
- Dynamic approach - priorities of tasks may change from request to request
- Mixed approach - some tasks have fixed priorities, others don't
‣ At any instant, the highest priority task is run
- Whenever there is a request for a task that is of higher priority than the one currently being executed, the running task is interrupted and the newly requested task is started
• Therefore, scheduling algorithm == method to assign priorities

Importance of Scheduling Algorithm

• Consider
‣ Task 1 with period T1 = 50 ms and a worst-case execution time of C1 = 25 ms
‣ Task 2 with T2 = 100 ms and C2 = 40 ms
‣ Each task instance should finish before the next instance of that task arrives
• CPU utilization, Ui, of task i is Ci/Ti, so that U1 = 50% and U2 = 40%
‣ This means total requested utilization U = U1 + U2 = 90%
• Is there enough CPU time?
(Example from http://www.netrino.com/Embedded-Systems/How-To/RMA-Rate-Monotonic-Algorithm)

Deriving the Optimum Fixed Priority Assignment Rule

Critical Instant of a Task

• Definitions
‣ Response time of a request of a certain task is the time span between the request and the end of the response to that request
‣ Critical instant for a task = time instant at which a request for that task will have the maximum response time
‣ Critical time zone of a task = time interval between a critical instant & the deadline of the corresponding request of the task
• Can use the critical instant to determine whether a given priority assignment will yield a feasible scheduling algorithm
‣ If requests for all tasks at their critical instants are fulfilled before their respective deadlines, then the scheduling algorithm is feasible
• Theorem 1: A critical instant for any task occurs whenever the task is requested simultaneously with requests of all higher priority tasks

Example

• Consider τ1 & τ2 with T1=2, T2=5, and C1=1, C2=1
• Case 1: τ1 has higher priority than τ2
‣ Priority assignment is feasible
‣ Can increase C2 to 2 and still avoid overflow
[Figure: timelines of τ1 and τ2 over the critical time zone, for C2=1 and C2=2]

Example (contd.)

• Case 2: τ2 has higher priority than τ1
‣ Priority assignment is still feasible
‣ But can't increase beyond C1=1, C2=1
[Figure: timelines of τ2 and τ1 over the critical time zone]
Case 1 seems to be the better priority assignment for schedulability… how do we formalize this?
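The two-task example above can be replayed mechanically. The sketch below (an illustration, not from the slides) simulates fixed-priority preemptive scheduling at unit time granularity from a simultaneous release at t = 0, which by Theorem 1 is a critical instant, and reports whether every request finishes before the next request of the same task. The function name and structure are assumptions made for this sketch.

```python
# Fixed-priority preemptive scheduling simulated from a critical instant
# (all tasks released together at t = 0). Unit time steps; all parameters are
# integers. tasks[i] = (T_i, C_i), with list order == priority (index 0 is the
# highest priority). horizon should cover the hyperperiod (LCM of the periods).
def meets_deadlines(tasks, horizon):
    remaining = [0] * len(tasks)       # work left for each task's current instance
    next_release = [0] * len(tasks)
    for t in range(horizon):
        for i, (T, C) in enumerate(tasks):
            if t == next_release[i]:
                if remaining[i] > 0:   # previous instance still unfinished at its deadline
                    return False
                remaining[i] = C
                next_release[i] += T
        for i in range(len(tasks)):    # run the highest-priority task with pending work
            if remaining[i] > 0:
                remaining[i] -= 1
                break
    return all(r == 0 for r in remaining)

# Case 1: tau1 (T=2, C=1) at higher priority -- feasible even with C2 = 2
print(meets_deadlines([(2, 1), (5, 2)], horizon=10))   # True
# Case 2: tau2 at higher priority -- infeasible once C2 is raised to 2
print(meets_deadlines([(5, 2), (2, 1)], horizon=10))   # False
```

With C2 = 1 both priority orderings succeed, matching the slides; once C2 is raised to 2, only the ordering with τ1 at higher priority still meets every deadline.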
Observation

• Consider τ1 & τ2 with T1 < T2
• Let τ1 be the higher priority task. From Theorem 1, the following must hold:
⎣T2/T1⎦C1 + C2 ≤ T2 (necessary condition, but not sufficient)
• Let τ2 be the higher priority task. The following must hold:
C1 + C2 ≤ T1
• Note:
C1 + C2 ≤ T1
⇒ ⎣T2/T1⎦C1 + ⎣T2/T1⎦C2 ≤ ⎣T2/T1⎦T1
⇒ ⎣T2/T1⎦C1 + ⎣T2/T1⎦C2 ≤ T2
⇒ ⎣T2/T1⎦C1 + C2 ≤ T2, since ⎣T2/T1⎦ ≥ 1
• Therefore, whenever T1 < T2 and C1, C2 are such that the task schedule is feasible with τ2 at higher priority than τ1, it is also feasible with τ1 at higher priority than τ2
‣ but the opposite is not true

A Possible Rule for Priority Assignment

• Assign priorities according to request rates, independent of run times
‣ Higher priorities for tasks with higher request rates
‣ For tasks τi and τj, if Ti < Tj, then Priority(τi) > Priority(τj)
• Called Rate-Monotonic (RM) priority assignment
‣ It is optimum among static priority based schemes
• Theorem 2: No other fixed priority assignment can schedule a task set if RM priority assignment can't schedule it, i.e., if a feasible priority assignment exists, then the RM priority assignment is feasible

Intuitive Proof for RM Optimality

• Consider n tasks {τ1, τ2, …, τn} such that T1 < T2 < … < Tn
• Suppose they are schedulable with non-RM priorities {Pr(1), …, Pr(n)}
‣ ∴ ∃ at least one pair of adjacent tasks, τp and τp+1, such that Pr(p) < Pr(p+1) [higher value means higher priority]
• Swap the priorities of tasks τp and τp+1
‣ New priority of τp is Pr(p+1), new priority of τp+1 is Pr(p)
‣ Note that Pr(p+1) > Pr(p) (by assumption)
• Tasks {τ1, …, τp-1} are not affected
‣ We are only affecting lower priority tasks
• Tasks {τp+2, …, τn} are also not affected
‣ Both τp and τp+1 need to be executed (irrespective of the order) before any task in {τp+2, …, τn} gets executed
• Task τp is not affected
‣ Since we are only increasing its priority

Proof (contd.)

• Consider τp+1:
‣ Since the original schedule is feasible, in the time interval [0, Tp] exactly one instance each of τp and τp+1 completes execution, along with (possibly multiple) instances of tasks in {τ1, …, τp-1}
- Note that τp+1 executes before τp
‣ The new schedule is identical, except that τp executes before τp+1 (start/end times of the higher priority tasks are the same)
- Still, exactly one instance each of τp and τp+1 completes in [0, Tp]. Since Tp < Tp+1, task τp+1 is schedulable
[Figure: old schedule (τp+1 before τp) vs. new schedule (τp before τp+1) over [0, Tp+1]]
• We have proven that swapping the priorities of two adjacent tasks to bring them into accordance with RM does not affect schedulability (i.e., all tasks {τ1, τ2, …, τn} are still schedulable)

Proof (contd.)

• If τp and τp+1 are the only such non-RM pair in the original schedule, we are done, since the new schedule will be RM
• If not, then starting from the original schedule, a sequence of such re-orderings of adjacent task pairs ultimately yields an RM schedule (exactly like bubble sort)
• E.g., four tasks with initial priorities {3, 1, 4, 2} for {τ1, …, τ4}:
{3 1 4 2} is schedulable → {3 4 1 2} is schedulable → {4 3 1 2} is schedulable → {4 3 2 1} (the RM priority assignment) is schedulable
• Hence, Theorem 2 is proved.
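For completeness, here is a small sketch (illustrative only, not from the slides) of the swap argument just described: repeatedly exchanging the priorities of an adjacent pair that violates RM order, exactly as bubble sort does, turns any fixed priority assignment into the RM one. The intermediate assignments it prints may differ in order from the slide's sequence, since more than one adjacent pair can be out of RM order at a time; the function name is an assumption of this sketch.

```python
# Turn an arbitrary fixed priority assignment into the RM assignment by swapping
# adjacent out-of-order pairs (the bubble-sort view of the Theorem 2 proof).
# Tasks are indexed so that T1 < T2 < ... < Tn, hence RM order means priorities
# strictly decreasing with the task index.
def swaps_to_rm(priorities):
    """priorities[i] is the priority of tau_{i+1}; a larger value means higher priority."""
    pr = list(priorities)
    steps = [tuple(pr)]
    changed = True
    while changed:
        changed = False
        for p in range(len(pr) - 1):
            if pr[p] < pr[p + 1]:                      # tau_p has the shorter period but lower priority
                pr[p], pr[p + 1] = pr[p + 1], pr[p]    # swap the pair, as in the proof
                steps.append(tuple(pr))
                changed = True
    return steps

# Four-task example from the proof: {3, 1, 4, 2} ends up as the RM order {4, 3, 2, 1}
for assignment in swaps_to_rm([3, 1, 4, 2]):
    print(assignment)
```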