Lecture #5: The Rate-Monotonic Theory for Real-Time Scheduling


EEM202A/CSM213A - Fall 2011
Mani Srivastava [email protected]
Networked & Embedded Systems Lab, UCLA
Copyright (c) 2011

Reading List for This Lecture
• MANDATORY READING
‣ L. Sha, R. Rajkumar, and S.S. Sathaye. Generalized rate-monotonic scheduling theory: a framework for developing real-time systems. Proceedings of the IEEE, vol. 82, no. 1, pp. 68-82, Jan. 1994.
- http://ieeexplore.ieee.org/iel1/5/6554/00259427.pdf?arnumber=259427
‣ L. Sha, R. Rajkumar, and J.P. Lehoczky. Priority inheritance protocols: an approach to real-time synchronization. IEEE Transactions on Computers, vol. 39, no. 9, pp. 1175-1185, Sept. 1990.
- http://beru.univ-brest.fr/~singhoff/cheddar/publications/sha90.pdf
• RECOMMENDED READING
‣ C.L. Liu and J.W. Layland. Scheduling algorithms for multiprogramming in a hard-real-time environment. Journal of the Association for Computing Machinery, vol. 20, no. 1, pp. 46-61, Jan. 1973.
- http://citeseer.ist.psu.edu/liu73scheduling.html
‣ J. Lehoczky, L. Sha, and Y. Ding. The rate monotonic scheduling algorithm: exact characterization and average case behavior. Proceedings of the Real-Time Systems Symposium, pp. 166-171, Dec. 1989.
‣ E. Bini, G. Buttazzo, and G. Buttazzo. Rate monotonic analysis: the hyperbolic bound. IEEE Transactions on Computers, vol. 52, no. 7, pp. 933-942, July 2003.
- http://feanor.sssup.it/~giorgio/paps/2003/ieeetc-hb.pdf

Scheduling Tasks with Timing Constraints
• Allocate processor time to tasks so that critical timing constraints are met
‣ Each task has an associated terminating function that needs to be executed
• Variants of the scheduling problem
‣ Statically known vs. dynamic task set
‣ Single vs. multiple processors
‣ One-shot vs. repetitive
- In the repetitive case, multiple instances (or requests) of a task arrive
‣ Sporadic vs. periodic
‣ Single-rate vs. multi-rate periodic tasks
‣ Stateless vs. stateful tasks
‣ Dependent vs. independent tasks
‣ Known vs. unknown arrival times
• This lecture: static priority based preemptive scheduling
‣ Single processor
‣ Known number of independent but stateful tasks with infinitely many instances
‣ Arrival times of task instances are unknown, but arrivals are periodic with different rates

Computation & Timing Model of the System
• Requests for tasks with hard deadlines are periodic, with constant inter-request intervals but otherwise unknown request arrival times
• Run-time for each task is constant for that task and does not vary with time
‣ can be interpreted as the maximum running time
• Deadlines consist of runnability constraints only
‣ each task must finish before the next request for it
‣ eliminates the need for buffering to queue tasks
• Tasks are independent
‣ requests for a certain task do not depend on the initiation or completion of requests for other tasks
‣ however, their periods may be related
• Other implicit assumptions
‣ No task can suspend itself, e.g. for I/O
‣ All tasks are fully preemptible
‣ All kernel overheads are zero

Formally Characterizing the Task Set
• Set Γ of n independent tasks τ1, τ2, ..., τn
• Request periods are T1, T2, ..., Tn
‣ Request rate of τi is 1/Ti
‣ τi,j denotes the j-th instance of the i-th task (∀ j = 1, 2, ...)
• Request phases R1, R2, ..., Rn are unknown
‣ Ri represents the arrival time of τi,1
‣ The task set is called concrete if all Ri are known, non-concrete otherwise
‣ Here we restrict attention to non-concrete task sets
• Run-times are C1, C2, ..., Cn
‣ These are worst-case execution times
• Relative deadlines are D1, D2, ..., Dn
‣ Special case considered here: Di == Ti
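The slides contain no code; the following minimal Python sketch (all names are illustrative, not from the lecture) simply restates the task-set parameters Ti, Ci, Di and the per-task utilization Ui = Ci/Ti as a data structure, using the two-task example that appears later in the lecture.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PeriodicTask:
    """One task tau_i of the set Gamma: period T_i, WCET C_i, relative deadline D_i."""
    name: str
    T: float                    # request period T_i (request rate = 1/T_i)
    C: float                    # worst-case execution time C_i
    D: Optional[float] = None   # relative deadline D_i; None models the special case D_i == T_i

    def __post_init__(self):
        if self.D is None:
            self.D = self.T

    @property
    def utilization(self) -> float:
        """Per-task CPU utilization U_i = C_i / T_i."""
        return self.C / self.T

# Two-task example used later in the lecture: T1 = 50, C1 = 25, T2 = 100, C2 = 40
gamma = [PeriodicTask("tau1", T=50, C=25), PeriodicTask("tau2", T=100, C=40)]
total_utilization = sum(t.utilization for t in gamma)   # 0.5 + 0.4 = 0.9
```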
Scheduling Algorithm
• A set of rules that determines the task to be executed at a particular moment
• One possibility: preemptive & priority driven
‣ Tasks are assigned priorities
- Static or fixed approach: priorities are assigned to tasks once and for all
- Dynamic approach: priorities of tasks may change from request to request
- Mixed approach: some tasks have fixed priorities, others don't
‣ At any instant, the highest-priority ready task runs
- Whenever there is a request for a task of higher priority than the one currently executing, the running task is interrupted and the newly requested task is started
• Therefore, scheduling algorithm == method of assigning priorities

Importance of the Scheduling Algorithm
• Consider
‣ Task 1 with period T1 = 50 ms and worst-case execution time C1 = 25 ms
‣ Task 2 with T2 = 100 ms and C2 = 40 ms
‣ Each task instance should finish before the next instance of that task arrives
• CPU utilization of task i is Ui = Ci/Ti, so U1 = 50% and U2 = 40%
‣ This means total requested utilization U = U1 + U2 = 90%
• Is there enough CPU time?
(Example from http://www.netrino.com/Embedded-Systems/How-To/RMA-Rate-Monotonic-Algorithm)

Deriving the Optimum Fixed Priority Assignment Rule

Critical Instant of a Task
• Definitions
‣ Response time of a request of a task = the time span between the request and the end of the response to that request
‣ Critical instant of a task = the time instant at which a request for that task has the maximum response time
‣ Critical time zone of a task = the time interval between a critical instant and the deadline of the corresponding request of the task
• The critical instant can be used to determine whether a given priority assignment yields a feasible schedule
‣ If requests for all tasks at their critical instants are fulfilled before their respective deadlines, then the priority assignment is feasible
• Theorem 1: A critical instant for any task occurs whenever the task is requested simultaneously with requests for all higher-priority tasks

Example
• Consider τ1 & τ2 with T1 = 2, T2 = 5, and C1 = 1, C2 = 1
• Case 1: τ1 has higher priority than τ2
‣ The priority assignment is feasible
‣ C2 can even be increased to 2 and still avoid overflow
[Figure: timelines for τ1 and τ2 over the critical time zone (t = 0 to 5), shown for C2 = 1 and for C2 = 2]

Example (contd.)
• Case 2: τ2 has higher priority than τ1
‣ The priority assignment is still feasible
‣ But C1 = 1, C2 = 1 cannot be increased
[Figure: timelines for τ2 and τ1 over the critical time zone (t = 0 to 5), with τ2 executing first]
• Case 1 seems to be the better priority assignment for schedulability... how do we formalize this?

Observation
• Consider τ1 & τ2 with T1 < T2
• Let τ1 be the higher-priority task. From Theorem 1, the following must hold (a necessary, but not sufficient, condition):
⎣T2/T1⎦ C1 + C2 ≤ T2
• Let τ2 be the higher-priority task. The following must hold:
C1 + C2 ≤ T1
• Note:
C1 + C2 ≤ T1
⇒ ⎣T2/T1⎦ C1 + ⎣T2/T1⎦ C2 ≤ ⎣T2/T1⎦ T1
⇒ ⎣T2/T1⎦ C1 + ⎣T2/T1⎦ C2 ≤ T2
⇒ ⎣T2/T1⎦ C1 + C2 ≤ T2, since ⎣T2/T1⎦ ≥ 1
• Therefore, whenever T1 < T2 and C1, C2 are such that the task set is schedulable with τ2 at higher priority than τ1, it is also schedulable with τ1 at higher priority than τ2
‣ but the opposite is not true
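The slides carry out the critical-instant check graphically. As a hedged sketch (not the lecture's own method), the same Theorem 1 check can be done numerically with the standard response-time recurrence R = Ci + Σj ⎡R/Tj⎤ Cj over the higher-priority tasks, i.e. the exact analysis in the Lehoczky, Sha, and Ding paper from the reading list. The code reuses the illustrative PeriodicTask class from the earlier sketch.

```python
import math

def response_time(task: PeriodicTask, higher_priority: list) -> float:
    """Worst-case response time starting from a critical instant (Theorem 1):
    the task is released together with all higher-priority tasks.
    Iterate R <- C_i + sum_j ceil(R / T_j) * C_j until it converges."""
    R = task.C
    while True:
        interference = sum(math.ceil(R / hp.T) * hp.C for hp in higher_priority)
        R_next = task.C + interference
        if R_next == R:
            return R            # fixed point reached: worst-case response time
        if R_next > task.D:
            return R_next       # already past the deadline; this ordering is infeasible
        R = R_next

def feasible(tasks_by_priority: list) -> bool:
    """tasks_by_priority is ordered from highest to lowest priority."""
    return all(response_time(t, tasks_by_priority[:i]) <= t.D
               for i, t in enumerate(tasks_by_priority))

tau1 = PeriodicTask("tau1", T=50, C=25)
tau2 = PeriodicTask("tau2", T=100, C=40)
print(feasible([tau1, tau2]))   # True:  with tau1 at higher priority, both deadlines are met
print(feasible([tau2, tau1]))   # False: tau1 would finish at 40 + 25 = 65 > T1 = 50
```

Running it on the 50/25 and 100/40 example confirms the Observation for this case: the shorter-period-first ordering is feasible while the reverse is not.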
A Possible Rule for Priority Assignment
• Assign priorities according to request rates, independent of run-times
‣ Higher priorities for tasks with higher request rates (shorter periods)
‣ For tasks τi and τj, if Ti < Tj then Priority(τi) > Priority(τj)
• Called Rate-Monotonic (RM) priority assignment
‣ It is optimum among static-priority schemes
• Theorem 2: No other fixed priority assignment can schedule a task set that RM priority assignment cannot schedule, i.e., if any feasible fixed priority assignment exists, then the RM priority assignment is also feasible

Intuitive Proof of RM Optimality
• Consider n tasks {τ1, τ2, ..., τn} such that T1 < T2 < ... < Tn
• Let them be schedulable with non-RM priorities {Pr(1), ..., Pr(n)}
‣ ∴ there exists at least one pair of adjacent tasks, τp and τp+1, such that Pr(p) < Pr(p+1) [a higher value means higher priority]
• Swap the priorities of tasks τp and τp+1
‣ New priority of τp is Pr(p+1); new priority of τp+1 is Pr(p)
‣ Note that Pr(p+1) > Pr(p) (by assumption)
• Tasks {τ1, ..., τp-1} are not affected
‣ We are only changing the priorities of lower-priority tasks
• Tasks {τp+2, ..., τn} are also not affected
‣ Both τp and τp+1 must execute (in either order) before any task in {τp+2, ..., τn} executes
• Task τp is not affected
‣ We are only increasing its priority

Proof (contd.)
• Consider τp+1:
‣ Since the original schedule is feasible, in the time interval [0, Tp] exactly one instance each of τp and τp+1 completes execution, along with (possibly multiple) instances of tasks in {τ1, ..., τp-1}
- Note that τp+1 executes before τp
‣ The new schedule is identical, except that τp executes before τp+1 (the start/end times of the higher-priority tasks are unchanged)
- Still, exactly one instance each of τp and τp+1 completes in [0, Tp]. Since Tp < Tp+1, task τp+1 remains schedulable
[Figure: old schedule vs. new schedule over [0, Tp+1], with τp and τp+1 swapping execution order while the slots of {τ1, ..., τp-1} stay fixed]
• We have shown that swapping the priorities of two adjacent tasks to bring them into RM order does not affect schedulability (i.e., all tasks {τ1, τ2, ..., τn} remain schedulable)

Proof (contd.)
• If τp and τp+1 are the only such non-RM pair in the original schedule, we are done, since the new schedule is RM
• If not, then starting from the original schedule, a sequence of such re-orderings of adjacent task pairs ultimately yields the RM assignment (exactly like bubble sort)
• E.g., four tasks with initial priorities {3, 1, 4, 2} for {τ1, ..., τ4}:
{3 1 4 2} is schedulable
→ {3 4 1 2} is schedulable
→ {4 3 1 2} is schedulable
→ {4 3 2 1} is schedulable — the RM priority assignment
• Hence, Theorem 2 is proved.
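As an illustrative sketch only (the helper names are hypothetical, not from the lecture), RM priority assignment is just a sort by period, and the adjacent-swap argument behind Theorem 2 can be replayed mechanically on the slide's four-task example.

```python
def rate_monotonic_order(tasks: list) -> list:
    """RM priority assignment: the shorter the period, the higher the priority."""
    return sorted(tasks, key=lambda t: t.T)

def swaps_to_rm(priorities: list) -> list:
    """Replay the Theorem 2 argument: priorities[i] is the priority of tau_{i+1}
    (tasks indexed by increasing period, larger value = higher priority).
    Repeatedly swap the first adjacent pair that violates the RM order, recording
    each intermediate assignment -- each such swap preserves schedulability."""
    p = list(priorities)
    steps = [tuple(p)]
    while True:
        for i in range(len(p) - 1):
            if p[i] < p[i + 1]:                  # shorter period but lower priority
                p[i], p[i + 1] = p[i + 1], p[i]
                steps.append(tuple(p))
                break
        else:
            return steps                         # no violation left: assignment is RM

# Four-task example from the slide, initial priorities {3, 1, 4, 2}:
print(swaps_to_rm([3, 1, 4, 2]))
# [(3, 1, 4, 2), (3, 4, 1, 2), (4, 3, 1, 2), (4, 3, 2, 1)] -- ends at the RM assignment

# Reusing the tasks from the earlier sketch:
print([t.name for t in rate_monotonic_order([tau2, tau1])])   # ['tau1', 'tau2']
```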