Architectural Support for Real-Time Computing Using Generalized Rate Monotonic Theory
≪Survey≫

Lui SHA* and Shirish S. SATHAYE**

* Software Engineering Institute, Carnegie Mellon University, Pittsburgh PA 15213, U.S.A.
** Electrical and Computer Engineering Department, Carnegie Mellon University. The author is with Digital Equipment Corporation's Distributed Systems Architecture Group, and is also currently a Ph.D. candidate at Carnegie Mellon University.

Abstract

The rate monotonic theory and its generalizations have been adopted by national high technology projects such as the Space Station and have recently been supported by major open standards such as the IEEE Futurebus+ and POSIX.4. In this paper, we focus on the architectural support necessary for scheduling activities using the generalized rate monotonic theory. We briefly review the theory and provide an application example. Finally we describe the architectural requirements for the use of the theory.

Key Words: Real-Time Scheduling, Distributed Real-Time System, Rate Monotonic Scheduling

1. Introduction

Real-time computing systems are critical to an industrialized nation's technological infrastructure. Modern telecommunication systems, factories, defense systems, aircraft and airports, space stations and high energy physics experiments cannot operate without them. In real-time applications, the correctness of a computation depends not only on its results but also on the time at which outputs are generated. The measures of merit in a real-time system include:

● Predictably fast response to urgent events.
● High degree of schedulability. Schedulability is the degree of resource utilization at or below which the timing requirements of tasks can be ensured. It can be thought of as a measure of the number of timely transactions per second.
● Stability under transient overload. When the system is overloaded by events and it is impossible to meet all the deadlines, we must still guarantee the deadlines of selected critical tasks.

Generalized rate monotonic scheduling (GRMS) theory allows system developers to meet the above requirements by managing system concurrency and timing constraints at the level of tasking and message passing. In essence, this theory ensures that as long as the system utilization of all tasks lies below a certain bound, and appropriate scheduling algorithms are used, all tasks meet their deadlines. This puts the development and maintenance of real-time systems on an analytic, engineering basis, making these systems easier to develop and maintain.

This theory begins with the pioneering work by Liu and Layland16), in which the rate monotonic algorithm was introduced for scheduling independent periodic tasks. The rate monotonic scheduling (RMS) algorithm gives higher priorities to periodic tasks with higher rates. RMS is an optimal static priority scheduling algorithm for independent periodic tasks with end of period deadlines.
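The utilization bound referred to above is the Liu and Layland16) result: n independent periodic tasks scheduled by the rate monotonic algorithm meet all end-of-period deadlines whenever their total utilization C1/T1 + ... + Cn/Tn does not exceed n(2^(1/n) - 1). The C sketch below only illustrates this sufficient test; the task set in main() is hypothetical and is not taken from the paper.

```c
#include <math.h>
#include <stdio.h>

/* Worst-case computation time C and period T of a periodic task. */
typedef struct { double C, T; } task_t;

/* Liu-Layland test: n independent periodic tasks scheduled by the rate
   monotonic algorithm meet all end-of-period deadlines if their total
   utilization does not exceed n(2^(1/n) - 1).  The test is sufficient,
   not necessary. */
static int rms_schedulable(const task_t *tau, int n)
{
    double u = 0.0;
    for (int i = 0; i < n; i++)
        u += tau[i].C / tau[i].T;
    double bound = n * (pow(2.0, 1.0 / n) - 1.0);
    printf("utilization = %.3f, bound = %.3f\n", u, bound);
    return u <= bound;
}

int main(void)
{
    /* Hypothetical task set; the C and T values are illustrative only. */
    task_t tau[] = { { 20, 100 }, { 30, 150 }, { 60, 300 } };
    printf("schedulable by the sufficient test: %s\n",
           rms_schedulable(tau, 3) ? "yes" : "no");
    return 0;
}
```

For this illustrative set the utilization is 0.6, below the three-task bound of about 0.779, so the test succeeds.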
RMS theory has since been generalized to analyze the schedulability of aperiodic tasks with both soft deadlines and hard deadlines24), interdependent tasks that must synchronize19),18), tasks with deadlines shorter than periods15), tasks with arbitrary deadlines13), and single tasks having multiple code segments with different priority assignments8). RMS has also been extended to analyze wide area network scheduling23). RMS has been applied to improve response times of aperiodic messages in a token ring network25). Cache algorithms for real-time systems using RMS were developed in9).

Because of its versatility and ease of use, GRMS has gained rapid acceptance. For example, it is used for developing real-time software in the NASA Space Station Freedom Program7) and the European Space Agency5), and is supported by the IEEE Futurebus+ standard6) and IEEE POSIX.417). GRMS has been previously reviewed in20),14) and22). Uniprocessor scheduling and implications for Ada tasking are described in20), major theoretical results are reviewed in14), and some important R&D decisions in the development of this theory are examined in22)(1).

This paper focuses on architectural support for the engineering of distributed real-time systems using GRMS. We first review the essential elements of GRMS that are needed for the development of a distributed system at a relatively fast pace(2). We then illustrate the application of GRMS in the design of a hypothetical distributed real-time system. Finally we describe architectural support for using GRMS. The paper is organized as follows. In Section 2 we describe a distributed real-time system model that will be used to illustrate the application of the theory in the rest of the paper. Section 3 reviews the basic elements of GRMS. Section 4 illustrates the use of the theory. Section 5 describes architectural support for application of GRMS. Section 6 has some concluding remarks.

(1) A handbook on using GRMS for real-time system analysis and design is currently under development at the Software Engineering Institute, CMU.
(2) Additional examples and illustrations can be found in20).

2. A System Model

In this section we describe a simple model of a distributed real-time system that serves as a vehicle to illustrate GRMS theory. Fig. 1 shows a distributed system consisting of several nodes connected by a network. Each node in the network is a multiprocessor. Each processor in the node has a CPU, memory and an operating system (OS). The processors communicate over a shared backplane bus. We assume that the OS and the backplane bus support priority scheduling. For example, the OS could be POSIX.417) and the backplane could be Futurebus+6),21). The network could be a token ring25) or a dual link network23) that supports GRMS.

Fig. 1 Block Diagram of Distributed System
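To make the priority scheduling assumption concrete, the sketch below shows one way a node's OS could assign fixed priorities in rate monotonic order (shorter period, higher priority). This is an illustration under stated assumptions, not the paper's method: it uses the POSIX threads real-time interfaces as later standardized rather than the POSIX.4 draft cited above, and the three task periods are hypothetical.

```c
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* Periods of three hypothetical periodic tasks, in milliseconds,
   listed in increasing order. */
static const int period_ms[3] = { 10, 40, 100 };

static void *task_body(void *arg)
{
    int id = *(const int *)arg;
    /* A real periodic task would block here until its next release;
       the body is elided because the paper does not prescribe one. */
    printf("task %d (period %d ms) running\n", id, period_ms[id]);
    return NULL;
}

int main(void)
{
    pthread_t tid[3];
    static int idx[3] = { 0, 1, 2 };
    int pmax = sched_get_priority_max(SCHED_FIFO);

    for (int i = 0; i < 3; i++) {
        pthread_attr_t attr;
        struct sched_param sp;

        pthread_attr_init(&attr);
        pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
        pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
        /* Rate monotonic assignment: the shorter the period, the higher
           the fixed priority. */
        sp.sched_priority = pmax - i;
        pthread_attr_setschedparam(&attr, &sp);
        if (pthread_create(&tid[i], &attr, task_body, &idx[i]) != 0)
            fprintf(stderr,
                    "pthread_create failed (SCHED_FIFO may need privileges)\n");
    }
    for (int i = 0; i < 3; i++)
        pthread_join(tid[i], NULL);
    return 0;
}
```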
Each node in the system consists of signal processors and control processors. In addition to performing signal processing and control functions, nodes send system status information periodically to a display node that interfaces with operators. An operator may send commands to nodes whenever the need arises. Each signal processor in a node is connected to a sensor. The results of each signal processor are periodically sent to a tracking processor, which is a high performance numeric processor dedicated to tracking the motion of objects. The result of tracking is periodically sent over the bus to the control processor. The control processors are general purpose computers which perform feedback control tasks and communicate with operators via the network.

The architecture utilizes both tasking and message passing paradigms. Application software is partitioned into allocation units, each of which can be allocated to a processor. An allocation unit groups together closely related application functions implemented as tasks. Tasks within an allocation unit communicate via shared variables. Tasks in different allocation units communicate via messages. Allocation units can be freely relocated as long as the resulting configuration is still schedulable.

3. Overview of Generalized Rate Monotonic Scheduling

In this section we review basic results which allow us to design a distributed system with the features described in Section 2. We begin with the scheduling of independent periodic and aperiodic tasks. We then address the issues of task synchronization and the effect of having task deadlines before the end of their periods.

3.1 Scheduling Independent Tasks

A periodic task τi is characterized by a worst case computation time Ci and a period Ti. Unless mentioned otherwise, we assume that a periodic task must finish by the end of its period. Tasks are independent if they do not need to synchronize with each other. A real-time system typically consists of both periodic and aperiodic tasks. The scheduling of aperiodic tasks can be treated within the rate monotonic framework of periodic task scheduling. For example,

Example 1: Suppose that we have two tasks. […] On the other hand, we can deposit one unit of service time in a "ticket box" every 100 units of time; when a new "ticket" is deposited, the unused old tickets, if any, are discarded. With this approach, no matter when the aperiodic request arrives during a period of 100, it will find a ticket for one unit of execution time in the ticket box. That is, τ2 can use the ticket to preempt τ1 and execute immediately when the request occurs. In this case, τ2's response time is precisely one unit and the deadlines of τ1 are still guaranteed.

This is the idea behind a class of aperiodic server algorithms11) that can reduce aperiodic response time by a large factor (a factor of 50 in this example). We allow the aperiodic servers to preempt the periodic tasks for a limited duration that is allowed by the rate monotonic scheduling formula. An aperiodic server algorithm called the Sporadic Server, which handles hard deadline aperiodic tasks, is described in24). Instead of refreshing the server's budget periodically, at fixed points in time, replenishment is determined by when requests are serviced. In the simplest approach, the budget is refreshed one period after it has been exhausted, but earlier refreshing is also possible.

A sporadic server is only allowed to preempt the execution of periodic tasks as long as its computation budget is not exhausted. When the budget is used up, the server can continue to execute at background priority if time is available. When the server's budget is refreshed, its execution can resume at the server's assigned priority. There is no overhead if there are no requests.
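The replenishment rule just described can be made concrete with a small sketch. The data structure, field names and request times below are illustrative assumptions, not taken from24); the code implements only the simplest policy in the text: the server spends its budget at its assigned priority, falls back to background service when the budget is exhausted, and has the full budget restored one server period after exhaustion.

```c
#include <stdio.h>

/* Simplified sporadic-server state.  Times are abstract integer ticks;
   the names are illustrative. */
typedef struct {
    int  capacity;    /* full execution budget per server period       */
    int  budget;      /* budget currently available                    */
    int  period;      /* server period                                 */
    long refill_at;   /* time at which the budget will be restored     */
} sporadic_server_t;

/* Serve one unit of aperiodic work at time `now`; returns 1 if it ran at
   the server's (high) priority, 0 if it had to run in the background. */
static int serve_one_unit(sporadic_server_t *s, long now)
{
    /* Simplest policy: restore the full budget one period after it was
       exhausted. */
    if (s->budget == 0 && now >= s->refill_at)
        s->budget = s->capacity;
    if (s->budget > 0) {
        s->budget--;
        if (s->budget == 0)               /* refill one period from now */
            s->refill_at = now + s->period;
        return 1;                         /* may preempt periodic tasks */
    }
    return 0;                             /* background execution only  */
}

int main(void)
{
    sporadic_server_t s = { .capacity = 1, .budget = 1,
                            .period = 100, .refill_at = 0 };
    long arrivals[] = { 5, 30, 110, 250 };    /* hypothetical requests */
    for (int i = 0; i < 4; i++)
        printf("t=%3ld: served at %s priority\n", arrivals[i],
               serve_one_unit(&s, arrivals[i]) ? "server" : "background");
    return 0;
}
```

With these assumed arrival times, the second request within the same server period runs only in the background, while the others consume the replenished budget and run immediately at the server's priority, mirroring the behavior described above.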