
January 2009

A Compositional Framework for Avionics (ARINC-653) Systems

Arvind Easwaran University of Pennsylvania

Insup Lee University of Pennsylvania, [email protected]

Oleg Sokolsky University of Pennsylvania, [email protected]

Steve Vestal Honeywell International Inc.



University of Pennsylvania Department of Computer and Information Science Technical Report No. MS-CIS-09-04


Abstract

Cyber-physical systems (CPSs) are becoming all-pervasive, and due to their increasing complexity they are designed using component-based approaches. Temporal constraints of such complex CPSs can then be modeled using hierarchical frameworks. In this paper, we consider one such avionics CPS described by ARINC specification 653-2. The real-time workload in this system comprises partitions, where each partition consists of one or more processes. Processes incur blocking and preemption overheads, and can communicate with other processes in the system. In this work, we develop techniques for automated scheduling of such partitions. At present, system designers manually schedule partitions based on interactions they have with application vendors. This approach is not only time consuming, but can also result in under-utilization of resources. Hence, in this work we propose compositional analysis based scheduling techniques for partitions.


A Compositional Scheduling Framework for Digital Avionics Systems

Arvind Easwaran* (CISTER/IPP-HURRAY, Polytechnic Institute of Porto, Portugal; [email protected])
Insup Lee, Oleg Sokolsky (Department of CIS, University of Pennsylvania, PA 19104, USA; {lee,sokolsky}@cis.upenn.edu)
Steve Vestal (Boston Scientific, MN 55112, USA; [email protected])

Abstract

ARINC specification 653-2 describes the interface between application software and underlying middleware in a distributed real-time avionics system. The real-time workload in this system comprises partitions, where each partition consists of one or more processes. Processes incur blocking and preemption overheads, and can communicate with other processes in the system. In this work, we develop compositional techniques for automated scheduling of such partitions and processes. At present, system designers manually schedule partitions based on interactions they have with the partition vendors. This approach is not only time consuming, but can also result in under-utilization of resources.

[Figure 1. Scheduling hierarchy for partitions. Two processors, each with a global partition-level scheduler; partitions P1 and P2 run on one processor, P3 and P4 on the other. Each partition Pi contains processes τ_{i,1}, ..., τ_{i,m_i} scheduled by a local scheduler. The figure also marks semaphore-based blocking between processes and a communication chain with an end-to-end latency bound.]

1 Introduction

ARINC standards, developed and adopted by the Engineering Standards for Avionics and Cabin Systems committee, deliver substantial benefits to airlines and the aviation industry by promoting competition, providing interchangeability, and reducing life-cycle costs for avionics and cabin systems. In particular, the 600 series ARINC specifications and reports define enabling technologies that provide a design foundation for digital avionics systems. Within the 600 series, this work deals with ARINC specification 653-2, part I [3] (henceforth referred to as ARINC-653), which defines a general-purpose Application/Executive (APEX) software interface between the operating system of an avionics computer and the application software.

As described in ARINC-653, the real-time system of an aircraft comprises one or more core modules connected with one another using switched Ethernet. Each core module is a hardware platform that consists of one or more processors, among other things. They provide space and temporal partitioning for independent execution of avionics applications. Each independent application is called a partition, and each partition in turn is comprised of one or more processes representing its real-time resource demand. The workload on a single processor in a core module can therefore be described as a two-level hierarchical real-time system. Each partition comprises one or more processes that are scheduled among themselves using a (local) partition specific scheduler. All the partitions that are allocated to the same processor are then scheduled among themselves using a (global) partition level scheduler. For example, Figure 1 shows two such systems, where partitions P1 and P2 are scheduled together under a global scheduler on one processor, and partitions P3 and P4 are scheduled together under a global scheduler on another processor. Each partition Pi in turn is comprised of processes τ_{i,1}, ..., τ_{i,m_i}, scheduled under a local scheduler¹. Processes are periodic tasks that communicate with each other. Sequences of such communicating processes form dependency chains, and designers can specify end-to-end latency bounds for them. For example, Figure 1 shows one such chain between tasks τ_{1,1}, τ_{2,2}, and τ_{3,2}. Processes within a partition can block each other using semaphores for access to shared data, giving rise to blocking overhead (tasks τ_{4,2} and τ_{4,m4} in the figure). Further, processes and partitions can also be preempted by higher priority processes and partitions, respectively, resulting in preemption overheads.

* Work done when the author was a PhD student at the University of Pennsylvania, USA, and a summer intern at Honeywell Inc., USA.
¹ The local scheduler can be different from the global scheduler and each of the other local schedulers.

There are several problems related to the hierarchical system described above that must be addressed. For scheduling partitions, it is desirable to abstract the communication dependencies between processes using parameters like offsets, jitter, and constrained deadlines. This simplifies a global processor and network scheduling problem into several local single processor scheduling problems. The process deadlines must also guarantee satisfaction of the end-to-end latency bounds specified by the designer. Given such processes, we must then generate scheduling parameters for partitions, to be used by the global scheduler. The resulting global schedule must provide sufficient processor capacity to schedule processes within partitions. Furthermore, these scheduling parameters must also account for blocking and preemption overheads incurred by processes and partitions.

This avionics system frequently interacts with the physical world, and hence is subject to stringent government regulations. To help with system certification, it is therefore desirable to develop schedulability analysis techniques for such hierarchical systems. Furthermore, these analysis techniques must account for resource overheads arising from preemptions, blocking, and communication. In order to protect the intellectual property rights of partition vendors, it is also desirable to support partition isolation; only as much information about partitions should be exposed as is required for global scheduling and the corresponding analysis. We therefore consider compositional techniques for partition scheduling, i.e., we schedule partitions and check their schedulability by composing interfaces, which abstractly represent the resource demand of processes within partitions.

Partition workloads can be abstracted into interfaces using existing compositional techniques [17, 11, 23, 9]. These techniques use resource models as interfaces, which are models characterizing resource supply from higher level schedulers. In the context of ARINC-653, these resource model based interfaces can be viewed as abstract resource supplies from the global scheduler to each partition. Various resource models like periodic [17, 23], bounded-delay [11], and EDP [9] have been proposed in the past. However, before we can use these techniques, we must modify them to handle ARINC-653 specific issues like communication dependencies, and blocking and preemption overheads. In this paper, we assume that communication dependencies and end-to-end latency bounds are abstracted using existing techniques into process parameters like offset, jitter, and constrained deadline (see [24, 21]). Although we do not present solutions to this abstraction problem, it is important because it motivates the inclusion of the aforementioned process parameters.

Contributions. In this paper we model ARINC-653 as a two-level hierarchical system, and develop compositional analysis techniques for it. This is a principled approach for scheduling ARINC-653 partitions that provides separation of concerns among different partition vendors, and therefore should facilitate system integration. In particular, our contributions can be summarized as follows:

1. We extend and improve existing periodic [17] and EDP [9] resource model based compositional analysis techniques to take into account (a) communications modeled as offsets, jitter, and constrained deadlines, and (b) process preemption and blocking overheads. Section 3 presents this solution, and illustrates its effectiveness using actual workloads from avionics systems.

2. We develop techniques to schedule partitions using their interfaces, taking into account preemption overheads incurred by partitions. Specifically, in Section 4, we present a technique to count the exact number of preemptions incurred by partitions in the global schedule.

2 System model and related work

Partitions and processes. Each partition has an associated period that identifies the frequency with which it executes, i.e., it represents the partition interface period. Typically, this period is derived from the periods of processes that form the partition. In this work, we assume that partitions are scheduled among themselves using the deadline-monotonic (DM) scheduler [16]. This enables us to generate a static partition level schedule at design time (hyper-period schedule), as required by the specification. Processes within a partition are assumed to be periodic tasks². ARINC-653 allows processes to be scheduled using preemptive, fixed priority schedulers, and hence we assume that each partition also uses DM to schedule processes in its workload.

As discussed in the introduction, we assume that communication dependencies and end-to-end latency requirements are modeled with process offsets, jitter, and constrained deadlines. Hence, each process can be specified as a constrained deadline periodic task τ = (O, J, T, C, D), where O is the offset, J is the jitter, T is the period, C is the worst case execution time, and D (≤ T) is the deadline. Jobs of this process are dispatched at time instants x T + O for every non-negative integer x, and each job is released for execution at some time in the interval [x T + O, x T + O + J]. For such a process it is reasonable to assume that O ≤ D [24]. Furthermore, we denote as P = ⟨{τ_1, ..., τ_n}, DM⟩ a partition comprising processes τ_1, ..., τ_n and using scheduler DM. Without loss of generality we assume that τ_i has higher priority than τ_j for all i < j under DM.

In addition to the restrictions specified so far, we make the following assumptions for the system described herein. These assumptions have been verified to hold in avionics systems. (1) The processes within a partition, and hence the partition itself, cannot be distributed over multiple processors. (2) Periods of partitions that are scheduled on the same processor are harmonic³. Note that this assumption does not prevent processes from having non-harmonic periods. (3) Processes in a partition cannot block processes in another partition. This is because mutual exclusion based on semaphores requires the use of shared memory, which can only happen within a partition.

² Partitions with aperiodic processes also exist in avionics systems, but they are scheduled as background workload. Hence, we ignore them.
³ A set of numbers {T_1, ..., T_n} is harmonic if and only if, for all i and j, either T_i divides T_j or T_j divides T_i.
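The process and partition model of this section translates directly into code. The following Python sketch is ours, not part of the original report: the class names and the helper is_harmonic are hypothetical, and it simply captures the tuple τ = (O, J, T, C, D), the DM priority ordering within a partition, and the harmonic-period check of footnote 3.

from dataclasses import dataclass
from typing import List

@dataclass
class Process:
    """Constrained-deadline periodic process tau = (O, J, T, C, D)."""
    O: float  # offset
    J: float  # release jitter
    T: float  # period
    C: float  # worst case execution time
    D: float  # relative deadline, D <= T

@dataclass
class Partition:
    """Partition P = <{tau_1, ..., tau_n}, DM>."""
    name: str
    processes: List[Process]

    def __post_init__(self):
        # Deadline-monotonic order: smaller deadline means higher priority,
        # so processes[0] is the highest priority process of the partition.
        self.processes.sort(key=lambda p: p.D)

def is_harmonic(periods: List[float]) -> bool:
    """Footnote 3: {T1, ..., Tn} is harmonic iff for every pair either
    Ti divides Tj or Tj divides Ti."""
    for i, a in enumerate(periods):
        for b in periods[i + 1:]:
            lo, hi = sorted((a, b))
            if hi % lo != 0:
                return False
    return True

# Partition interface periods 25, 50, 100 on one processor are harmonic.
assert is_harmonic([25, 50, 100])
assert not is_harmonic([25, 40])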

Related work. Traditionally, the partition scheduling problem has been addressed in an ad-hoc fashion based on interactions between the system designer and vendors who provide the partitions. Although many different ARINC-653 platforms exist (see [1, 2]), there is little work on automatic scheduling of partitions [14, 15, 20]. Kinnan et al. [14] only provide preliminary heuristic guidance, and the other studies [15, 20] use constraint-based approaches to look at combined network and processor scheduling. In contrast to …

… shortcomings.

3.1 Inadequacy of existing analysis

A periodic process such as the one described earlier consists of an infinite set of real-time jobs that are required to meet temporal deadlines. The resource request bound function of a process upper bounds the amount of computational resource required to meet all its temporal deadlines (rbf : ℜ → ℜ). Similarly, the request bound function of a partition is …

sbf_η(t) = { y₂ Θ + max{0, t − x₂ − y₂ Π}   if t ≥ Δ − Θ
          { 0                               otherwise        (4)

When processes in a partition have zero offset and jitter values, conditions for schedulability of the partition using a periodic or EDP resource model have been proposed in the past [23, 9]. These conditions can be easily extended for processes with non-zero jitter, and are presented below.

Theorem 1 A partition P = ⟨{τ_1 = (0, J_1, T_1, C_1, D_1), ..., τ_n = (0, J_n, T_n, C_n, D_n)}, DM⟩, where τ_j has higher priority than τ_k for all j < k, is schedulable over a periodic or EDP resource model R iff

∀i, 1 ≤ i ≤ n, ∃t_i ∈ (0, D_i − J_i] s.t. rbf_{P,i}(t_i) ≤ sbf_R(t_i),

where rbf_{P,i} is as defined in Equation (2).

A periodic or EDP resource model based interface for partition P can be generated using Theorem 1. For this purpose, we assume that the period of resource model R is equal to Π_P. If R is a periodic resource model, then techniques presented in [23] can be used to develop a periodic model based interface. Since we are interested in minimizing processor usage (and hence resource bandwidth), we must compute the smallest Θ that satisfies this theorem. Hence, for each process τ_i, we solve for different values of t_i and choose the smallest Θ among them. Θ for model R is then given by the largest value of Θ among all processes in P. Similarly, if R is an EDP resource model, then Easwaran et al. [9] have presented a technique that uses this theorem to compute a resource model having smallest bandwidth. However, as described in the introduction, processes can be more accurately modeled using non-zero offset values. A major drawback of the aforementioned techniques is then that Theorem 1 only gives sufficient schedulability conditions. This follows from the fact that the critical arrival pattern used by Equation (2) is pessimistic for processes with non-zero offset. Additionally, these techniques do not take into account preemption and blocking overheads incurred by processes.

In the following sections we extend Theorem 1 to accommodate processes with non-zero offsets, as well as to account for blocking and preemption overheads. Recollect from Section 2 that all the partitions scheduled on a processor are assumed to have harmonic interface periods. This observation leads to a tighter supply bound function for periodic resource models when compared to the general case. Therefore, we first present a new sbf for periodic resource models, and then extend Theorem 1.

[Figure 2. Tasks with harmonic periods. The figure shows the schedule of the task set τ_1 = (2, 1, 2), τ_2 = (4, 1, 4), τ_3 = (4, 1, 4), marking process releases and process deadlines.]

3.2 sbf under harmonic interface periods

In the technique described in [23], a periodic interface φ = ⟨Π, Θ⟩ is transformed into a periodic task τ_φ = (Π, Θ, Π) before it is presented to the global scheduler. Note that the period of model φ and task τ_φ are identical, and the period (Π) of task τ_φ is identical to its relative deadline. For the ARINC-653 partitions, this means that partitions scheduled on a processor are abstracted into periodic tasks with harmonic periods. When such implicit deadline periodic tasks⁴ are scheduled under DM, every job of a task is scheduled in the same time instants within its execution window. This follows from the observation that whenever a job of a task is released, all the higher priority tasks also release a job at the same time. For example, Figure 2 shows the schedule for a periodic task set {τ_1 = (2, 1, 2), τ_2 = (4, 1, 4), τ_3 = (4, 1, 4)}. It can be seen that every job of τ_3 is scheduled in an identical manner within its execution window.

Whenever task τ_φ is executing, the resource is available for use by periodic model φ. This means that resource supply allocations for φ also occur in an identical manner within intervals (n Π, (n + 1) Π], for all n ≥ 0. In other words, the blackout interval x_1 in sbf_φ can never exceed Π − Θ. For the example shown in Figure 2, assuming task τ_3 is transformed from a periodic resource model φ_3 = ⟨4, 1⟩, the blackout interval for φ_3 can never exceed 3. Therefore, the general sbf for periodic models given in Equation (3) is pessimistic for our case. The improved sbf_φ is defined as follows:

sbf_φ(t) = ⌊t/Π⌋ Θ + max{0, t − (Π − Θ) − Π ⌊t/Π⌋}        (5)

For an EDP resource model η = ⟨Π, Θ, Δ⟩, the blackout interval in sbf_η is Π + Δ − 2Θ [9]. Since Δ ≥ Θ is a necessary condition, this blackout interval can never be smaller than Π − Θ. Then, there is no advantage in using EDP models for partition interfaces over periodic models. Therefore, we focus on periodic models in the remainder of this paper.

⁴ Tasks with D = T.
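As an illustration (ours, not part of the original report), Equation (5) can be evaluated directly. The function name below is hypothetical; it assumes a periodic model ⟨Π, Θ⟩ under the harmonic-interface-period restriction of Section 2.

import math

def sbf_harmonic_periodic(t: float, Pi: float, Theta: float) -> float:
    """Equation (5): supply bound function of a periodic resource model
    phi = <Pi, Theta> when the interface periods on the processor are
    harmonic, so the blackout interval never exceeds Pi - Theta."""
    if t <= 0:
        return 0.0
    k = math.floor(t / Pi)
    return k * Theta + max(0.0, t - (Pi - Theta) - k * Pi)

# For the Figure 2 example phi_3 = <4, 1>: no supply is guaranteed during the
# first 3 time units, and one unit of supply is guaranteed by t = 4.
assert sbf_harmonic_periodic(3, 4, 1) == 0.0
assert sbf_harmonic_periodic(4, 4, 1) == 1.0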

3.3 Schedulability condition for partitions

Request function. When processes have non-zero offsets, identifying the critical arrival pattern to compute rbf is a non-trivial task. It has been shown that this arrival pattern could occur anywhere in the interval [0, LCM], where LCM denotes the least common multiple of the process periods (see [13]). As a result, no closed form expression for rbf is known in this case⁵. Therefore, we now introduce the request function (rf : ℜ × ℜ → ℜ), which for a given time interval gives the maximum possible amount of resource requested by the partition in that interval. Since rf computes the resource request for a specific time interval as opposed to an interval length, it can be computed without knowledge of the critical arrival pattern. When processes have non-zero jitter in addition to non-zero offsets, we must compute rf_{P,i} assuming an arrival pattern that results in the maximum higher priority interference for τ_i. The following definition gives this arrival pattern for a job of τ_i with latest release time t, where t = O_i + J_i + x T_i for some non-negative integer x.

Definition 1 (Arrival pattern with jitter [24]) Recall that a job of process τ = ⟨O, J, T, C, D⟩ is dispatched at time instant x T + O for some non-negative integer x, and can be released for execution at any time in the interval [x T + O, x T + O + J]. Then, a job of τ_i with latest release time t incurs maximum interference from higher priority processes in P whenever (1) all higher priority processes with dispatch time before t are released at or before t with maximum jitter, and (2) all higher priority processes with dispatch time at or after t are released with zero jitter.

The request function for processes with non-zero offset and jitter values is then given by the following equation:

rf_{P,i}(t_1, t_2) = Σ_{j=1}^{i} ( ⌈(t_2 − O_j)/T_j⌉ − ⌈(t_1 − O_j − J_j)/T_j⌉ ) C_j        (6)

Schedulability conditions. The following theorem presents exact schedulability conditions for partition P under a periodic resource model φ.

Theorem 2 Let T = {τ_1, ..., τ_n} denote a set of processes, where for each i, τ_i = (O_i, J_i, T_i, C_i, D_i). Partition P = ⟨T, DM⟩ is schedulable using a periodic resource model φ = ⟨Π, Θ⟩ iff ∀i : 1 ≤ i ≤ n, ∀t_x s.t. t_x + D_i − O_i − J_i < LCM_P and t_x = O_i + J_i + x T_i for some non-negative integer x, ∃t ∈ (t_x, t_x + D_i − O_i − J_i] such that

rf_{P,i}(0, t) ≤ sbf_φ(t)  and  rf_{P,i}(t_x, t) ≤ sbf_φ(t − t_x)        (7)

rf_{P,i} is given by Equation (6) and sbf_φ is given by Equation (5). Also, LCM_P denotes the least common multiple of the process periods T_1, ..., T_n.

Proof To prove that these conditions are sufficient for schedulability of P, we must validate the following statements: (1) it is sufficient to check schedulability of all jobs whose deadlines lie in the interval [0, LCM_P], and (2) Equation (7) guarantees that the job of τ_i with latest release time t_x is schedulable using periodic resource model φ.

Since D_i ≤ T_i and O_i ≤ D_i for all i, no process released before LCM_P can execute beyond LCM_P without violating its deadline. Furthermore, the dispatch pattern of processes in P is periodic with period LCM_P. Therefore, it is sufficient to check the schedulability of all jobs in the interval [0, LCM_P].

We now prove statement (2). Consider the job of τ_i with latest release time t_x. For this job to be schedulable under resource model φ, the higher priority interference encountered by the job in the interval [t_x, t) must be satisfied by resource model φ. This higher priority interference arises from processes released before t_x, as well as from those released at or after t_x. Condition rf_{P,i}(t_x, t) ≤ sbf_φ(t − t_x) guarantees that φ provides enough supply to satisfy the interference from processes released at or after t_x. To account for the interference from processes released before t_x, we have the second condition, i.e., rf_{P,i}(0, t) ≤ sbf_φ(t). This condition ensures that the minimum resource provided by φ in an interval of length t is at least as much as the total higher priority interference up to time t. This proves that these conditions are sufficient for schedulability of partition P.

We now prove that these conditions are also necessary for schedulability of P. For this purpose, observe that rf_{P,i}(0, t) ≤ sbf_φ(t) is a necessary condition to guarantee that resource model φ satisfies the higher priority interference in the interval [0, t). Furthermore, this condition alone is not sufficient, because it does not guarantee that φ will provide enough resource in the interval [t_x, t). The second condition ensures this property. □

A periodic resource model based interface for partition P can be generated using Theorem 2. Assuming period Π is equal to Π_P, we can use this theorem to compute the smallest capacity Θ that guarantees schedulability of P. When compared to Theorem 1, this theorem represents a computationally more expensive (exponential versus pseudo-polynomial), but more accurate interface generation technique. In fact, for many avionics systems we expect this technique to be computationally efficient as well. For instance, if process periods are harmonic, as in many avionics systems, then LCM_P is simply the largest process period, and our technique has pseudo-polynomial complexity in this case.

Although Theorem 2 presents an exact schedulability condition for P, it ignores the preemption and blocking overheads incurred by processes in P. Hence, in the following section, we extend our definition of rf to account for these overheads.

Blocking and preemption overheads. Recollect that processes incur blocking overhead because of mutual exclusion requirements modeled using semaphores. Blocking occurs when a lower priority process is executing in a critical section, and a higher priority process cannot preempt this lower priority process. In this case the higher priority process is said to be blocked by the lower priority process, resulting in blocking overheads. Assuming critical sections span entire process executions, two properties of this overhead can be derived immediately: (1) this overhead varies with each job of a process, and (2) any job of a process can be blocked by at most one lower priority process.

⁵ rbf_{P,i} defined in Equation (2) is only an upper bound.
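The following sketch shows one way Equation (6) and the check of Theorem 2 could be implemented. It is illustrative only: it assumes integer-valued process parameters so that the release times t_x and candidate check points t can be enumerated, it reuses the sbf of Equation (5), and the linear search for the smallest capacity Θ is our own simplification of the interface generation step; all function names are ours.

from math import ceil, floor, gcd
from functools import reduce
from typing import List, Tuple

# A process is (O, J, T, C, D) with integer parameters (simplifying assumption).
Proc = Tuple[int, int, int, int, int]

def sbf(t: float, Pi: int, Theta: float) -> float:
    # Equation (5): supply bound under harmonic interface periods.
    if t <= 0:
        return 0.0
    k = floor(t / Pi)
    return k * Theta + max(0.0, t - (Pi - Theta) - k * Pi)

def rf(procs: List[Proc], i: int, t1: float, t2: float) -> float:
    # Equation (6): maximum demand of tau_1 .. tau_i (DM order) in [t1, t2).
    total = 0.0
    for (O, J, T, C, D) in procs[: i + 1]:
        jobs = ceil((t2 - O) / T) - ceil((t1 - O - J) / T)
        total += max(0, jobs) * C   # clamp at zero for robustness
    return total

def schedulable(procs: List[Proc], Pi: int, Theta: float) -> bool:
    # Theorem 2: every job released before LCM must have some check point t
    # in (tx, tx + D - O - J] satisfying both conditions of Equation (7).
    lcm = reduce(lambda a, b: a * b // gcd(a, b), (p[2] for p in procs))
    for i, (O, J, T, C, D) in enumerate(procs):
        x = 0
        while True:
            tx = O + J + x * T
            if tx + D - O - J >= lcm:
                break
            ok = any(rf(procs, i, 0, t) <= sbf(t, Pi, Theta) and
                     rf(procs, i, tx, t) <= sbf(t - tx, Pi, Theta)
                     for t in range(tx + 1, tx + D - O - J + 1))
            if not ok:
                return False
            x += 1
    return True

def smallest_capacity(procs: List[Proc], Pi: int, step: float = 0.05) -> float:
    # Naive search for the smallest Theta that makes the partition schedulable.
    Theta = step
    while Theta <= Pi:
        if schedulable(procs, Pi, Theta):
            return Theta
        Theta += step
    return float("inf")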

[Figure 3. Illustrative example for BO_{P,l,i}(t). The figure shows releases and deadlines of processes τ_i, τ_{i+1}, τ_{i+2}, and τ_l around the time instant t.]

Consider a process set T = {τ_1, ..., τ_n} and partition P = ⟨T, DM⟩. We now present an approach to bound the blocking overhead for a job of process τ_l released at time t. Specifically, we compute the bound when this job is blocked by some process having priority lower than that of τ_i, for some i ≥ l. We assume that all processes with priority lower than τ_i can potentially block this job of τ_l. Our bound is given as

BO_{P,l,i}(t) = max_{k ∈ [i+1, ..., n]} { min {I_k, C_k} },        (8)

where I_k is defined as

I_k = { 0                        if ⌊t/T_k⌋ T_k + O_k ≥ t or ⌊t/T_k⌋ T_k + D_k ≤ t
      { ⌊t/T_k⌋ T_k + D_k − t    otherwise

For each process τ_k, we compute its largest interference on the job of τ_l released at time t, and then choose the maximum over all τ_k that have priority lower than τ_i. Any such τ_k released at or before t can block this job of τ_l, and this blocking overhead is at most its worst case execution time. Equation (8) uses this observation to compute the interference from τ_k. Figure 3 gives an illustrative example for this blocking overhead. Let the worst case execution requirement of processes τ_{i+1} and τ_{i+2}, shown in the figure, be 5 time units. Since the deadline of process τ_{i+1} is t + 8, its interference on the job of τ_l released at t is at most 8. However, its worst case execution requirement is 5, and hence its interference is at most 5 time units. The deadline of process τ_{i+2} is t + 3, and hence its maximum interference on this job of τ_l is 3 time units. Note that this is only a bound; the execution of processes τ_j, with j ≤ i, could be such that no τ_k is able to execute before t. The following equation presents a quantity BO_{P,l,i}(t_1, t_2), which bounds the blocking overhead incurred by all jobs of τ_l released in the interval [t_1, t_2).

BO_{P,l,i}(t_1, t_2) = Σ_{t : t ∈ [t_1, t_2) and τ_l released at t} BO_{P,l,i}(t)        (9)

When a higher priority process preempts a lower priority process, the context of the lower priority process must be stored for later use. When the lower priority process resumes its execution at some later time instant, this context must be restored. Thus, every preemption results in an execution overhead associated with storing and restoring of process contexts. Many different techniques for bounding this preemption overhead have been proposed in the past (see [22, 10]). Ramaprasad and Mueller [22] have proposed a preemption upper bound for processes scheduled under the Rate Monotonic scheduler (RM), and their technique can be extended to other fixed priority schedulers. However, they only present an algorithm to bound the preemptions, and do not give any closed form equations. Easwaran et al. [10] have proposed an analytical upper bound for the number of preemptions under fixed priority schedulers. They presented these bounds for processes with non-zero offset values and zero jitter. These equations can be easily extended to account for jitter in process releases, as well as for blocking overheads. We assume that an upper bound on the number of preemptions is obtained using one such existing technique. Furthermore, we let PO_{P,i}(t_1, t_2) denote this upper bound in the interval [t_1, t_2), for preemptions incurred by processes that have priority at least as much as τ_i. Assuming δ_p denotes the execution overhead incurred by processes for each preemption, the request function with blocking and preemption overheads is given as

rf_{P,i}(t_1, t_2) = Σ_{j=1}^{i} ( ⌈(t_2 − O_j)/T_j⌉ − ⌈(t_1 − O_j − J_j)/T_j⌉ ) C_j + δ_p × PO_{P,i}(t_1, t_2) + Σ_{j=1}^{i} BO_{P,j,i}(t_1, t_2)        (10)

3.4 Interface generation for sample workloads

We now demonstrate the effectiveness of our proposed technique using sanitized data sets obtained from an avionics system. These data sets are specified in Appendix A. There are 7 workloads, where each workload represents a set of partitions scheduled on a single processor. We consider two types of workloads: workloads in which tasks have non-zero offsets but zero jitter (workloads 1 and 2 in Appendix A.1), and workloads in which tasks have non-zero jitter but zero offsets (workloads 3 thru 7 in Appendix A.2).

Each workload is specified using an xml schema, which can be described as follows. The top level tag identifies the system level scheduler under which the entire workload is scheduled. The next level tag identifies a partition in the workload; its attributes min-period and max-period define the range of values for the interface period, scheduler defines the scheduling algorithm used by this partition, and name defines the name of the partition (vmips is described below). The last level tag <task offset="" jitter="" period="" capacity="" deadline="" /> defines a periodic process τ = (O, J, T, C, D). For workloads 1 and 2, Table 1 in Section 3.4.1 specifies the total resource utilization of individual partitions (Σ C/T).

[Figure 4. Interfaces for partitions P1, ..., P5: interface bandwidth versus interface period (period 1 and multiples of 5 up to 50) for φ1, ..., φ5, (a) using Theorem 2, (b) using the approach in [23].]

[Figure 5. Interfaces for partitions P6, ..., P11: interface bandwidth versus interface period for φ6, ..., φ11, (a) using Theorem 2, (b) using the approach in [23].]

For workloads 3 thru 7, Table 2 in Section 3.4.2 specifies the resource bandwidth reservations for individual partitions, in addition to total resource utilization. This bandwidth reservation is computed using the vmips field of the component tag in those workload specifications. Given a vmips value of x, the amount of resource bandwidth reserved is equal to x/17.76. These reservations were used by system designers to allocate processor supply to partitions.

We have developed a tool set that takes as input hierarchical systems specified using the aforementioned xml schema, and generates as output resource model based interfaces for them. In the following two sections we present the results generated using this tool set.

3.4.1 Workloads with non-zero offsets

Table 1. Workloads 1 and 2

Partition | Utilization      Partition | Utilization
P1        | 0.134            P6        | 0.12
P2        | 0.056            P7        | 0.1345
P3        | 0.028            P8        | 0.165
P4        | 0.1265           P9        | 0.006
P5        | 0.0335           P10       | 0.038
                             P11       | 0.048

In this section, we consider workloads 1 and 2 specified in Appendix A.1. First, we compare our proposed approach with the existing well known compositional analysis technique based on periodic resource models [23]. We assume that this technique uses Theorem 1 to generate periodic resource model based partition interfaces, and therefore ignores process offsets. This approach does not account for preemption and blocking overheads incurred by processes. Hence, to ensure a fair comparison, we ignore these overheads when computing interfaces using our approach as well. In Figures 4(a) and 5(a), we have plotted the resource bandwidths of interfaces obtained using our approach (Theorem 2). We have plotted these bandwidths for period values 1 and multiples of 5 up to 50. Note that since sbf_φ defined in Equation (5) is a linear function of capacity Θ, there is no need to use a linear lower bound like the one used in [23]. Similarly, we also obtained partition interfaces using Theorem 1 as discussed above, and their resource bandwidths are plotted in Figures 4(b) and 5(b).

As can be seen from these plots, interfaces obtained using our approach have a much smaller resource bandwidth when compared to those obtained using the existing technique. This gain in efficiency is because of two reasons: (1) we use a tighter sbf in Theorem 2 when compared to the existing approach, and (2) the existing approach ignores process offsets, and hence generates pessimistic interfaces. Although this is only an illustrative example, it is easy to see that the advantages of our interface generation technique hold in general. From the plots in Figures 4(a) and 5(a) we can also see that for some period values, bandwidths of our periodic resource models are equal to the utilization of the corresponding partitions. Since the utilization of a partition is the minimum possible bandwidth of a resource model that can schedule the partition, our approach generates optimal resource models for these periods. In these plots it can also be observed that the bandwidth increases sharply beyond a certain period. For interfaces φ1, φ4, and φ8, corresponding to partitions P1, P4, and P8, respectively, the bandwidth increases sharply beyond period 25. This increase can be attributed to the fact that in these partitions the smallest process period is also 25. In our examples, since the smallest process period corresponds to the earliest deadline in a partition, resource models with periods greater than this smallest value require larger bandwidth to schedule the partition.

Finally, we also generated partition interfaces using Theorem 2, taking into account preemption and blocking overheads. The resource bandwidths of these interfaces are plotted in Figures 6(a) and 6(b). For preemption overhead we assumed that the overhead for each preemption δ_p is 0.1, and that every job of a process preempts some lower priority process. Blocking overhead was computed using the upper bound given in Equation (9). As expected, resource bandwidths of these interfaces are significantly higher in comparison to the bandwidths in Figures 4(a) and 5(a)⁶. Since our preemption and blocking overheads are only upper bounds and not necessarily tight, the minimum bandwidths of resource models that can schedule these partitions lie somewhere in between the two plots.

[Figure 6. Partition interfaces with blocking and preemption overheads: bandwidth versus interface period, (a) partitions P1, ..., P5 (interfaces φ1, ..., φ5), (b) partitions P6, ..., P11 (interfaces φ6, ..., φ11).]

3.4.2 Workloads with non-zero jitter

In this section, we consider workloads 3 thru 7 specified in Appendix A.2. Since these workloads have zero offsets, we used Theorem 1 to generate periodic resource model based partition interfaces. In this theorem, we used the sbf given by Equation (5), and interface periods are as specified by the min-period and max-period fields of the component tags⁷. For preemption overheads we assumed that the overhead for each preemption δ_p is 0.1, and that every job of a process preempts some lower priority process. For blocking overheads we assumed that every lower priority process can block the process under consideration, up to its worst case execution time. Consider the process set T = {τ_1, ..., τ_n} and partition P = ⟨T, DM⟩. Then, for a process τ_l ∈ T, its blocking overhead is equal to max_{k>l} {C_k}.

We now compare the bandwidth of generated interfaces with the reserved bandwidth specified by the vmips field of component tags. Table 2 lists the following four parameters for each partition in workloads 3 thru 7: (1) total utilization of the partition (Σ C/T), (2) reserved bandwidth (vmips/17.76), (3) interface bandwidth computed as described above, and (4) percentage increase in bandwidth ((reserved − computed)/computed × 100).

Table 2. Bandwidths for workloads 3 thru 7

Partition name | Utilization | Reserved | Computed | Overhead
Workload 3
PART16 ID=16   | 0.01965     | 0.04505  | 0.0246   | 83.1%
PART29 ID=29   | 0.199415    | 0.37669  | 0.3735   | 0.9%
PART35 ID=35   | 0.05168     | 0.22185  | 0.0717   | 209.4%
PART20 ID=20   | 0.035125    | 0.09798  | 0.0589   | 66.3%
PART32 ID=32   | 0.033315    | 0.08164  | 0.0781   | 4.5%
PART36 ID=36   | 0.045       | 0.11036  | 0.12     | −8%
PART33 ID=33   | 0.0379      | 0.09178  | 0.0579   | 58.5%
PART34 ID=34   | 0.04764     | 0.10755  | 0.0676   | 59.1%
PART17 ID=17   | 0.00408     | 0.01126  | 0.0082   | 37.3%
PART31 ID=31   | 0.00684     | 0.01689  | 0.0137   | 23.3%
Workload 4
PART30 ID=30   | 0.11225     | 0.23086  | 0.169    | 36.6%
PART16 ID=16   | 0.01965     | 0.04505  | 0.0246   | 83.1%
PART20 ID=20   | 0.035125    | 0.09797  | 0.0589   | 66.3%
PART17 ID=17   | 0.00408     | 0.01126  | 0.0082   | 37.3%
PART26 ID=26   | 0.13496     | 0.44932  | 0.2538   | 77%
PART27 ID=27   | 0.02784     | 0.06869  | 0.0478   | 43.7%
PART28 ID=28   | 0.0552      | 0.12106  | 0.0752   | 61%
Workload 5
PART15 ID=15   | 0.5208      |          | 0.5224   |
PART13 ID=13   | 0.01126     | 0.03378  | 0.0163   | 107.2%
PART12 ID=12   | 0.0050      | 0.01126  | 0.02     | −43.7%
Workload 6
PART16 ID=16   | 0.01965     | 0.04505  | 0.0246   | 83.1%
PART19 ID=19   | 0.14008     | 0.32939  | 0.2284   | 44.2%
PART21 ID=21   | 0.12751     | 0.30011  | 0.2667   | 12.5%
PART22 ID=22   | 0.13477     | 0.31137  | 0.2631   | 18.3%
PART17 ID=17   | 0.00408     | 0.01126  | 0.0082   | 37.3%
Workload 7
PART45 ID=45   | 0.00325     | 0.02815  | 0.01     | 181.5%

As can be seen from this table, the bandwidths of partition interfaces generated using our technique are significantly smaller than the reserved bandwidths of partitions. However, when generating partition interfaces, we ignore the resource requirements of aperiodic processes in partitions. These aperiodic processes are identified by a period field equal to zero in the task tag. For example, they are present in partition "PART26 ID=26" of workload 4 and partition "PART22 ID=22" of workload 6. Since the workloads do not specify any deadlines for these processes (they execute as background processes in ARINC-653), we cannot determine the resource utilization of these processes. Then, one may argue that the difference between the reserved bandwidth and the bandwidth computed by our technique is in fact used by aperiodic processes. Although this can be true, our results show that even for partitions with no aperiodic processes, there are significant savings using our technique.

⁶ The y-axis in Figures 6(a) and 6(b) ranges from 0 to 1, whereas in Figures 4(a) and 5(a) it ranges from 0 to 0.45.
⁷ Note that min-period = max-period in all the component tags in workloads 3 thru 7.
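The reserved-bandwidth and percentage-increase columns of Table 2 follow directly from the formulas just given. A tiny illustrative check (helper names are ours), using the PART16 row of workload 3:

def reserved_bandwidth(vmips: float) -> float:
    # Reserved bandwidth specified by the vmips field of a component tag.
    return vmips / 17.76

def overhead_percent(reserved: float, computed: float) -> float:
    # Column (4) of Table 2: percentage increase of the reserved bandwidth
    # over the interface bandwidth computed by the proposed technique.
    return (reserved - computed) / computed * 100.0

# PART16 in workload 3: reserved 0.04505, computed 0.0246, about 83.1 %.
print(round(overhead_percent(0.04505, 0.0246), 1))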

[Figure 7. Preemption count terminology. The figure shows a schedule of τ_i with context switches = 5, execution chunks = 5, and preemptions (N_i) = 4, illustrating the notion of an execution chunk.]

4 Partition scheduling

Let the partition set P1, ..., Pn be scheduled on a uniprocessor platform under the DM scheduler. Furthermore, let each partition Pi be represented by a periodic resource model based interface φ_i = ⟨Π_i, Θ_i⟩ as described in Section 3. Without loss of generality we assume that Π_1 ≤ ... ≤ Π_n. To schedule these interfaces on the uniprocessor platform, we must transform each resource model into a task that the higher level DM scheduler can use. For this purpose, we use the transformation which for interface φ_i generates the process τ_i = (0, 0, Π_i, Θ_i, Π_i). It has been shown that this transformation is both necessary and sufficient w.r.t. the resource requirements of φ_i [23].

If each partition interface is transformed as above, then the processes in the resulting set (τ_1, ..., τ_n) have implicit deadlines, zero offset values, and harmonic periods (partition periods are harmonic). Liu and Layland have shown that DM is an optimal scheduler for such processes [18]. In the following section we present a technique to count the number of preemptions incurred by this process set. The partition level schedule can then be generated after adjusting the execution requirements of τ_1, ..., τ_n to account for preemption overheads.

4.1 Partition level preemption overhead

Preemption overhead for partitions represented as processes can be computed using the upper bounds described in Section 3. However, as described in the previous section, these processes are scheduled under DM, and have harmonic periods, implicit deadlines, and zero offset and jitter values. For such a process set, it is easy to see that every job of each process executes in the same time instants relative to its release time (see Figure 2). Therefore, every job of a process is preempted an identical number of times. For this case, we now develop an analytical technique to compute the exact number of preemptions.

Consider the process set τ_1, ..., τ_n defined in the previous section. For each i, let N_i denote the number of preemptions incurred by each job of τ_i. We first give an upper bound for N_i, and later show how to tighten this bound. For this upper bound, we assume that the numbers of preemptions N_1, ..., N_{i−1} for processes τ_1, ..., τ_{i−1}, respectively, are known. We also assume that the worst case execution requirements of these processes are adjusted to account for preemption overheads. Then, the following iterative equation gives an upper bound for N_i.

N_i^(k) = ⌈ Θ_i^(k) / ( Π_{i−1} − Σ_{j=1}^{i−1} (Π_{i−1}/Π_j) Θ_j ) ⌉ ( Π_{i−1}/Π_1 − Σ_{j=1}^{i−1} N_j (Π_{i−1}/Π_j) ) − 1        (11)

In this equation we assume Θ_i^(0) = Θ_i and Θ_i^(k) = Θ_i + N_i^(k−1) δ_p + δ_p, where δ_p denotes the execution overhead for each preemption. N_i^(k) ignores the preemption incurred by process τ_i at the start of its execution, and hence the additional δ_p in the capacity adjustment (see Figure 7). Then, the upper bound for N_i is given by that value of N_i^(k) for which N_i^(k) = N_i^(k−1).

Theorem 3 Let N_i* denote the value of N_i^(k) in Equation (11) such that N_i^(k) = N_i^(k−1). Then N_i* ≥ N_i.

Proof In the k-th iteration, given Θ_i^(k), Equation (11) computes the number of dispatches of process τ_{i−1} that occur before the execution of Θ_i^(k) units of τ_i. This computation is done inside the ceiling function by taking into account the higher priority interference for τ_i. We then determine the number of preemptions incurred by τ_i within the execution window of each of these dispatches of τ_{i−1}. Since every job of a process executes in the same time instants relative to its release time, this number of preemptions is the same in each of these execution windows, except the first and last one. In the first window it is smaller by one because we ignore the preemption at the start of execution of τ_i. In the last window it is smaller because the execution of τ_i can terminate before the end of the window. Use of the ceiling function implies that the first and last windows are treated similar to other execution windows, and this is one factor for the upper bound.

To determine the number of preemptions within each execution window of τ_{i−1}, Equation (11) computes the number of execution chunks of τ_i in each window. Each set of consecutive execution units of a process in a schedule is a single execution chunk (see Figure 7)⁸. The maximum possible number of chunks is given by Π_{i−1}/Π_1. However, since higher priority processes also execute in this window, τ_i does not necessarily have so many execution chunks. To get a tighter estimate for N_i, we subtract the execution chunks of higher priority processes from this maximum possible number. For each higher priority process τ_j, Π_{i−1}/Π_j gives the number of jobs of τ_j in the current execution window, and N_j gives the number of preemptions incurred by each of those jobs. Then, the number of execution chunks of τ_j in the entire window is (N_j + 1) Π_{i−1}/Π_j. However, all of these execution chunks of τ_j cannot always be discarded; specifically the last one. Since the response time of τ_j need not necessarily coincide with a release of τ_1, τ_i could potentially continue its execution immediately after the last execution chunk of τ_j. For example, in Figure 8, τ_j's response time does not coincide with the release of τ_1, and hence τ_i can potentially execute in the marked time intervals. In Equation (11) we always use N_j for the number of execution chunks of τ_j, and hence the result is an upper bound. Finally, we subtract one from the entire number to discount the preemption at the start of execution of τ_i. □

⁸ Note that the number of execution chunks is always one more than the number of preemptions encountered by the process.
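The iteration of Equation (11) can be sketched as follows. This is an illustrative implementation under the stated assumptions (harmonic interface periods Π_1 ≤ ... ≤ Π_n and already-known counts N_1, ..., N_{i−1}); the function name, indexing convention, and the choice N[0] = 0 for the highest priority interface task are ours.

from math import ceil
from typing import List

def preemption_upper_bound(Pi: List[int], Theta: List[float], N: List[int],
                           i: int, delta_p: float) -> int:
    """Equation (11) / Theorem 3: iterative upper bound on the number of
    preemptions N_i incurred by each job of the interface task
    tau_i = (0, 0, Pi[i], Theta[i], Pi[i]).

    Indexing is 0-based: Pi[0] <= Pi[1] <= ... are the harmonic interface
    periods (index 0 is the highest priority task), Theta[j] the capacities,
    and N[j] the already-computed preemption counts for all j < i (N[0] is 0,
    since the highest priority task is never preempted). delta_p is the
    execution overhead charged per preemption.
    """
    assert i >= 1, "the highest priority process incurs no preemptions"
    P_prev = Pi[i - 1]
    # Processor time left for tau_i inside one period of tau_{i-1}, after
    # every higher priority capacity has been served (the denominator of
    # the ceiling term in Equation (11)).
    slack = P_prev - sum((P_prev // Pi[j]) * Theta[j] for j in range(i))
    # Maximum number of execution chunks of tau_i per window of tau_{i-1},
    # minus the chunks consumed by higher priority jobs (N[j] per job of tau_j).
    chunks = P_prev // Pi[0] - sum(N[j] * (P_prev // Pi[j]) for j in range(i))

    theta_k = Theta[i]          # Theta_i^(0) = Theta_i
    n_prev = -1
    while True:
        n_cur = ceil(theta_k / slack) * chunks - 1
        if n_cur == n_prev:     # fixed point reached: Theorem 3 upper bound
            return n_cur
        n_prev = n_cur
        # Capacity adjusted for the preemptions counted so far, plus the one
        # ignored at the start of tau_i's execution (the extra delta_p).
        theta_k = Theta[i] + n_cur * delta_p + delta_p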

[Figure 8. Execution chunks of process τ_j. The figure shows a window with Π_{i−1}/Π_1 = 4 and N_j = 1, marking the possible execution chunks of τ_i together with the releases and deadlines of τ_1, τ_j, and τ_{i−1}.]

Since Θ_i^(k) is non-decreasing and cannot be greater than Π_i, this iterative computation must terminate and has pseudo-polynomial complexity. This computation only gives an upper bound for N_i, due to two reasons: (1) the ceiling function, and (2) the use of N_j as the count for execution chunks of process τ_j. In fact, Equation (11) cannot be used to upper bound N_i, because it assumes knowledge of the preemption counts N_1, ..., N_{i−1}. We now present a technique that overcomes these shortcomings. In particular, we modify Equation (11) as follows:

• We replace the ceiling with the floor function, and add a separate expression that counts preemptions in the last execution window of τ_{i−1}.

• We replace N_j in the equation with a quantity I_j, which is either N_j + 1 or N_j, depending on whether the response time of τ_j coincides with a release of τ_1.

Let N_i^(k)′ denote the preemption count for τ_i in the last execution window of τ_{i−1}, when Θ_i^(k) is the execution requirement of τ_i. Then, N_i is given by the following iterative equation.

N_i^(k) = ⌊ Θ_i^(k) / ( Π_{i−1} − Σ_{j=1}^{i−1} (Π_{i−1}/Π_j) Θ_j ) ⌋ ( Π_{i−1}/Π_1 − Σ_{j=1}^{i−1} I_j (Π_{i−1}/Π_j) ) + N_i^(k)′ − 1        (12)

In this equation we assume Θ_i^(0) = Θ_i and Θ_i^(k) = Θ_i + N_i^(k−1) δ_p + δ_p. Also, N_i is given by that value of N_i^(k) for which N_i^(k) = N_i^(k−1). We now give equations to compute the two unknown quantities, I_j and N_i^(k)′, in the above equation.

I_j = { N_j + 1   if ⌈R_j/Π_1⌉ = ⌊R_j/Π_1⌋
      { N_j       otherwise

Here R_j denotes the worst case response time of process τ_j. Since j ∈ [1, ..., i − 1], N_j is known and therefore R_j can be computed. N_i^(k)′ is given by the following equation.

N_i^(k)′ = ⌈ (R_i^(k) − T_{i−1}^(k)) / Π_1 ⌉ − Σ_{j=2}^{i−1} ⌈ (R_i^(k) − T_{i−1}^(k)) / Π_j ⌉ I_j        (13)

In this equation R_i^(k) denotes the response time of τ_i with execution requirement Θ_i^(k), and T_{i−1}^(k) is the time of the last dispatch of τ_{i−1}. R_i^(k) − T_{i−1}^(k) gives the total time taken by τ_i to execute in the last execution window of τ_{i−1}. This, along with the higher priority interference in the window, gives N_i^(k)′. The following theorem then observes that the preemption count generated using Equation (12) is equal to N_i.

Theorem 4 Let N_i* denote the value of N_i^(k) in Equation (12) such that N_i^(k) = N_i^(k−1). Then N_i* = N_i.

In this iterative procedure as well, Θ_i^(k) is non-decreasing and cannot be greater than Π_i. Therefore, the computation is of pseudo-polynomial complexity in the worst case. One may argue that the exact preemption count can also be obtained by simulating the execution of processes. Since process periods are harmonic, the LCM is simply the largest process period, and therefore the simulation also runs in pseudo-polynomial time. However, in safety critical systems such as avionics, it is often required that we provide analytical guarantees for correctness. The iterative computation presented here serves this purpose.

Thus, each process τ_i can be modified to account for preemption overhead and is specified as τ_i = (0, 0, Π_i, Θ_i + (N_i + 1) δ_p, Π_i). If the resulting process set {τ_1, ..., τ_n} is schedulable⁹, then using Theorems 2 and 4 we get that the underlying partitions can schedule their workloads.

⁹ Liu and Layland have given response time based schedulability conditions for this case [18].

5 Conclusions

In this paper we presented the ARINC-653 standard for avionics real-time operating systems, and modeled it as a two-level hierarchical system. We extended existing resource model based techniques to handle processes with non-zero offset values. We then used these techniques to generate partition level schedules. Design of real-time systems in modern day aircraft is done manually through interactions between application vendors and system designers. Techniques presented in this paper serve as a platform for principled design of partition level schedules. They also provide analytical correctness guarantees, which can be used in system certification.

References

[1] Green Hills Software, ARINC 653 partition scheduler. www.ghs.com/products/safety_critical/arinc653.html.

[2] Windriver, platform for ARINC 653. www.windriver.com/products/platforms/safety_critical/.
[3] Avionics application software standard interface: Part 1 - required services (ARINC specification 653-2). Technical report, Avionics Electronic Engineering Committee (ARINC), March 2006.
[4] L. Almeida and P. Pedreiras. Scheduling within temporal partitions: response-time analysis and server design. In EMSOFT, 2004.
[5] M. Behnam, I. Shin, T. Nolte, and M. Nolin. SIRAP: A synchronization protocol for hierarchical resource sharing in real-time open systems. In EMSOFT, 2007.
[6] J. V. Busquets-Mataix, J. J. Serrano, R. Ors, P. Gil, and A. Wellings. Using harmonic task-sets to increase the schedulable utilization of cache-based preemptive real-time systems. In RTCSA, 1996.
[7] R. I. Davis and A. Burns. Hierarchical fixed priority pre-emptive scheduling. In RTSS, 2005.
[8] R. I. Davis and A. Burns. Resource sharing in hierarchical fixed priority pre-emptive systems. In RTSS, 2006.
[9] A. Easwaran, M. Anand, and I. Lee. Compositional analysis framework using EDP resource models. In RTSS, 2007.
[10] A. Easwaran, I. Shin, I. Lee, and O. Sokolsky. Bounding preemptions under EDF and RM schedulers. Technical Report MS-CIS-06-06, University of Pennsylvania, USA, 2006.
[11] X. Feng and A. Mok. A model of hierarchical real-time virtual resources. In RTSS, 2002.
[12] N. Fisher, M. Bertogna, and S. Baruah. The design of an EDF-scheduled resource-sharing open environment. In RTSS, 2007.
[13] J. Goossens. Scheduling of Hard Real-Time Periodic Systems with Various Kinds of Deadline and Offset Constraints. PhD thesis, Université Libre de Bruxelles, 1999.
[14] L. Kinnan, J. Wlad, and P. Rogers. Porting applications to an ARINC 653 compliant IMA platform using VxWorks as an example. In Proceedings of the 23rd Digital Avionics Systems Conference, 2004.
[15] Y.-H. Lee, D. Kim, M. Younis, and J. Zhou. Scheduling tool and algorithm for integrated modular avionics systems. In Proceedings of the 19th Digital Avionics Systems Conference, 2000.
[16] J. Leung and J. Whitehead. On the complexity of fixed-priority scheduling of periodic real-time tasks. Performance Evaluation, pages 237-250, 1982.
[17] G. Lipari and E. Bini. Resource partitioning among real-time applications. In ECRTS, 2003.
[18] C. Liu and J. Layland. Scheduling algorithms for multiprogramming in a hard-real-time environment. Journal of the ACM, 20(1):46-61, 1973.
[19] S. Matic and T. A. Henzinger. Trading end-to-end latency for composability. In RTSS, 2005.
[20] A. K. Mok, D.-C. Tsou, and R. C. M. de Rooij. The MSP.RTL real-time scheduler synthesis tool. In RTSS, 1996.
[21] M. D. Natale and J. Stankovic. Dynamic end-to-end guarantees in distributed real-time systems. In RTSS, 1994.
[22] H. Ramaprasad and F. Mueller. Tightening the bounds on feasible preemption points. In RTSS, 2006.
[23] I. Shin and I. Lee. Periodic resource model for compositional real-time guarantees. In RTSS, 2003.
[24] K. Tindell. Adding time-offsets to schedulability analysis. Technical Report YCS 221, Dept. of Computer Science, University of York, York, England, January 1994.
[25] K. Tindell and J. Clark. Holistic schedulability analysis for distributed hard real-time systems. Microprocessing and Microprogramming, 40:117-134, 1994.

A ARINC-653 workloads

A.1 Workloads with non-zero offset

Workload 1:

Workload 2:

A.2 Workloads with non-zero jitter

Workload 3:


Workload 4:

Workload 5:


Workload 6:

Workload 7:
