
Performance Evaluation on a Real-Time Database

Suhee Kim (Hoseo University, Korea, [email protected]); Sang H. Son and John A. Stankovic (University of Virginia, {son, stankovic}@cs.virginia.edu). March 6, 2002

Abstract. We have implemented an object-oriented real-time database system called BeeHive. Using BeeHive, the performance of two data-deadline cognizant scheduling policies, called EDDF and EDF-DC, and a baseline EDF policy, each with and without admission control, is evaluated through extensive experiments. The ranges where data-deadline cognizant scheduling policies are effective and where admission control plays a significant role are identified. These results represent one of the few sets of experimental results for an implemented real-time database; almost all previous results were obtained via simulation.

1. Introduction
As more and more applications interact in real time, the need for real-time databases is growing. For example, airplane tracking applications require timely updating of airplane locations, and stock trading requires timely updates of stock prices. These applications also need transaction semantics, hence real-time databases are a good choice. Most real-time database research has evaluated new ideas via simulation, and in much of the past work the only real-time issue addressed is that transactions have deadlines [1,2,4,5,6,7,8,9,12,14,16]. In general, though, transactions that process real-time data must use timely and relatively consistent data in order to produce correct results. Data read by a transaction must still be valid when the transaction completes, which imposes another constraint on completion time in addition to the transaction's deadline. This constraint is referred to as the data-deadline.

In this paper we describe the implementation of an object-oriented real-time database system, called BeeHive, which embodies a more complete model of a real-time database than systems that only consider transaction deadlines. The BeeHive architecture includes sensor transactions with periodic updates, user transactions with deadlines, sensor data with validity intervals after which the data is not usable, a multiprocessor implementation, and a software architecture with and without admission control. In addition, we evaluate three real-time scheduling policies and admission control on this system rather than via simulation. In this way, actual overheads and OS costs are accounted for and are shown to affect the results. Our performance results show that in systems without admission control, the policies that explicitly address data validity intervals perform extremely well. However, when admission control (which accounts for both CPU and I/O) is used, there is less need for data-oriented time cognizant policies. Overall, the ranges where the two data-deadline cognizant scheduling policies are effective and where the admission controller plays a role are identified.

The remainder of this paper is organized as follows. Section 2 presents the BeeHive system architecture. Section 3 describes the transaction model considered in the study. Section 4 outlines the transaction scheduling policies that have been implemented and evaluated. Section 5 describes the admission control model. Section 6 analyzes the results of the experiments. Section 7 summarizes the conclusions of the paper.

2. BeeHive System Architecture
BeeHive is a real-time database system consisting of the BeeHive database server, a transaction thread pool, and the BeeKeeper resource manager (see Figure 1). The BeeHive database server is an extension of the Scalable Heterogeneous Object Repository (SHORE), a database system developed by the Computer Science Department at the University of Wisconsin. SHORE provides several core database functionalities, including ACID transactions, concurrency control, and disk IO management. The main extensions we added include support for real-time transactions, data validity intervals, and admission control.

In BeeHive, client applications send transaction requests to the database through the BeeKeeper, which in turn communicates with the BeeHive database server. Based on the semantic requirements of the transaction request, the BeeKeeper first determines whether the request is to be admitted and then how the admitted transaction is to be scheduled. Once the transaction starts running, database accesses are supported through remote procedure calls (RPC) to the BeeHive database server.

Figure 1 BeeHive Architecture

2.1 BeeHive Database Server
The BeeHive database server consists of the SHORE database server, a listener thread, an IO transaction thread pool, a real-time IO scheduler, and a cleaner task (see Figure 2). The listener thread accepts client RPC connections and allocates a transaction thread from the pool for each connecting client. Each transaction thread in the thread pool processes RPC requests for transaction operations from its associated client, and passes the necessary information to the SHORE server to perform the work. The SHORE server provides functions such as create, read, write, delete, commit and abort. The real-time scheduler module periodically prioritizes the IO transaction threads in the IO transaction pool based on the specified scheduling policy. The cleaner module is in charge of freeing unused memory resources.

Figure 2 BeeHive Database Server

2.2 BeeKeeper Resource Manager
The BeeKeeper is the resource management process. This process executes on processor 1 of the multiprocessor; the database server executes on processor 2. The following are the main components of the BeeKeeper (see Figure 3).

- Service Mapper: The Service Mapper maps qualitative resource requests into quantitative requests for physical resources. It generates a uniform internal representation of the multiple (service-dependent) requests from applications and passes the transaction to the Admission Controller.

- Admission Controller: The Admission Controller performs tests to determine whether BeeHive has sufficient resources to support the service requirements of an incoming transaction without compromising the service guarantees made to currently active applications. It admits the incoming transaction based on the CPU and IO utilizations of the admitted transactions and the system response times of the completed transactions.

- Real-Time Scheduler: The scheduler manages the priorities of the transaction threads in the transaction thread pool, prioritizing them periodically based on the specified scheduling policy. Note that these threads run in the BeeKeeper process and are distinct from the threads found in the BeeHive database server process.

- Object Manager: The Object Manager provides the interface between the BeeKeeper and the BeeHive database. It supports retrieving, adding, removing, and updating objects in the BeeHive database. All transaction requests are processed by the Object Manager, which uses a remote procedure call (RPC) to connect to the server. A transaction is encapsulated in a BeeHive object.

- Monitor: The Monitor calculates various statistics such as global resource utilization and system real-time performance, and updates the system state table with this information.

3. Transaction Model

BeeHive is a firm real-time database system: transactions that have missed their deadlines add no value to the system and are aborted. There are two kinds of transactions, user transactions and sensor transactions. User transactions access both temporal and non-temporal data stored in the database and perform some computations, while sensor transactions are responsible for updating temporal objects periodically. All transactions (user and sensor) are predefined. The attributes of a temporal object (X) and a transaction (T) that are used in this study are defined in Table 1.

Figure 3 BeeKeeper Resource Manager

Table 1 Attributes of Temporal Objects and Transactions
AVI(X)        Absolute data validity interval of X
AVIb(X)       Beginning of AVI(X)
AVIe(X)       End of AVI(X)
A(T)          Arrival time of T
DL(T)         Deadline of T
RDL(T)        Relative deadline of T
DDLt(T)       Data-deadline of T at time t
PRt(T)        Priority of T at time t
EET(T)        Expected execution time of T
ERT(T)        Execution response time of T
CPUtime(T)    CPU time required to execute T
IOtime(T)     IO time required to execute T
UP(T)         Update period of a sensor transaction T
LAVI          Length of absolute data validity interval
SF            Deadline slack factor of user transactions

3.1 Sensor Transactions
Sensor transactions are periodic, write-only transactions that update temporal objects. Each temporal object in the database has a corresponding sensor transaction to update it periodically. Sensor transaction design varies in its deadline assignment principle and scheduling priority. There are two traditional approaches for maintaining data validity, namely the one-one and half-half principles. The one-one principle was used in the simulation studies by Haritsa et al. [7] and Huang [8]; it sets the period and the relative deadline of a sensor transaction equal to the length of the validity interval of its temporal object. Since, under the one-one principle, the gap between the executions of two consecutive instances of a transaction can be larger than the validity interval of the corresponding object, the data can become invalid. In the half-half principle, the period and the relative deadline of a sensor transaction are set to one-half of the length of the absolute validity interval of the data [21]. The half-half principle is used in our study because under this principle temporal data consistency is maintained as long as the corresponding sensor transaction meets its deadline.

3.2 User Transactions
User transactions access temporal and non-temporal objects in the database and perform computations. Two sets of user transactions were developed for the experiments; there are 8 kinds of transactions in each set. The database contains airplane and airport objects, and the transactions perform tasks such as reading the locations of planes, updating those locations, and finding all planes within an area of an airport. The relative deadline RDL of a user transaction T is set using the following formula: RDL(T) = SF * EET(T). Important attributes of a transaction include its deadline DL, expected execution time EET, required CPU time CPUtime, required IO time IOtime, and arrival time A to the system. Table 2 shows the settings for sensor transactions and user transactions.

Table 2 Sensor Transactions and User Transactions
Sensor Transactions:
    Update period:      UP(T) = (1/2) * LAVI of the object to be updated (periodic)
    Relative deadline:  RDL(T) = (1/2) * LAVI of the object to be updated
    Deadline:           DL(T) = A(T) + RDL(T)
User Transactions:
    Relative deadline:  RDL(T) = SF * EET(T)
    Deadline:           DL(T) = A(T) + RDL(T)
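To make the assignment rules in Table 2 concrete, the following Python sketch encodes them directly. It is only an illustration under assumed names; SensorTxn, UserTxn and their attributes are ours, not identifiers from the BeeHive code.

    from dataclasses import dataclass

    @dataclass
    class SensorTxn:
        arrival: float   # A(T), seconds
        lavi: float      # LAVI of the temporal object this transaction updates

        @property
        def period(self) -> float:             # UP(T) = 1/2 * LAVI (half-half principle)
            return self.lavi / 2.0

        @property
        def relative_deadline(self) -> float:  # RDL(T) = 1/2 * LAVI
            return self.lavi / 2.0

        @property
        def deadline(self) -> float:           # DL(T) = A(T) + RDL(T)
            return self.arrival + self.relative_deadline

    @dataclass
    class UserTxn:
        arrival: float   # A(T)
        eet: float       # EET(T), expected execution time
        sf: float        # deadline slack factor SF

        @property
        def relative_deadline(self) -> float:  # RDL(T) = SF * EET(T)
            return self.sf * self.eet

        @property
        def deadline(self) -> float:           # DL(T) = A(T) + RDL(T)
            return self.arrival + self.relative_deadline

For example, a sensor transaction updating an object with LAVI = 0.5sec gets a period and relative deadline of 0.25sec, matching the setting used later in Experiment 1-2.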

4. Real-Time Scheduling Policies
The scheduling algorithm should maximize the number of user transactions that meet their deadlines while maintaining temporal consistency. Sensor transactions are scheduled by the earliest deadline first policy, and they have higher priority than user transactions. The three real-time scheduling policies for user transactions are as follows.

- Earliest Deadline First (EDF): This is the traditional policy in which the priority of a transaction is its deadline. This policy is not cognizant of data-deadlines in scheduling. A transaction is aborted at commit time if any data object it read is invalid.

PRt(T) = DL(T)

- Earliest Data-Deadline First (EDDF): This policy assigns the minimum of the data-deadline and the deadline of a transaction T as the priority of the transaction.

PRt(T) = min(DL(T), DDLt(T))

This policy is cognizant of data-deadlines in scheduling. Whenever DDLt(T) changes, the change is reflected in the scheduling, depending on the values of DL(T) and DDLt(T). The data-deadline of a transaction T at time t, DDLt(T), is defined in pseudocode as follows.

Initially, DDLt(T) is undefined.
When transaction T reads a temporal object X at time t:
    if DDLt(T) is undefined:  DDLt(T) = AVIe(X)
    else:                     DDLt(T) = min(DDLt(T), AVIe(X))
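A minimal Python sketch of this data-deadline bookkeeping and of the EDDF priority rule is given below; the txn and obj attribute names are hypothetical stand-ins for the quantities defined in Table 1.

    def on_read_temporal(txn, obj):
        # Update DDLt(T) when transaction txn reads temporal object obj.
        avi_end = obj.avi_end                 # AVIe(X): end of the object's validity interval
        if txn.data_deadline is None:         # initially DDLt(T) is undefined
            txn.data_deadline = avi_end
        else:
            txn.data_deadline = min(txn.data_deadline, avi_end)

    def eddf_priority(txn):
        # PRt(T) = min(DL(T), DDLt(T)); an earlier time value means a higher priority.
        if txn.data_deadline is None:
            return txn.deadline
        return min(txn.deadline, txn.data_deadline)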

- Earliest Deadline First with Data Validity Check (EDF-DC): This policy is the same as EDF, except that a transaction T is aborted as soon as its data-deadline expires. Under EDF, by contrast, the data-deadline of a transaction T is only checked at commit time, and the transaction is aborted then if it has been violated.

In the experiments, the priorities of user transactions are assigned using these three policies, and the performance of the two data-deadline cognizant policies is compared with that of the baseline EDF policy.

5. Admission Control
When a real-time database system is overloaded, many transactions may miss their deadlines. If these transactions had not been admitted to the system in the first place, the system's limited resources would not have been wasted on them. The experiments are therefore extended to use admission control, and the ranges where admission control is effective are identified.

5.1 Admission Control Model
We have developed an admission controller for user transactions which considers both resource utilizations and system response times. All transactions are predefined, and for each transaction the expected execution time (EET = CPUtime + IOtime) is pre-analyzed. By calculating the CPU and IO utilizations of the transactions in the system, the admission controller first determines whether BeeHive can support the service requirement of an incoming user transaction. Once the transaction is admitted into the system, it is placed in the EDF queue to wait for execution. When its turn for execution comes, the deadline of the transaction is compared with the average execution response time of its type. If it has enough time to meet the deadline, it starts execution. If not, it leaves the system and is considered to have missed its deadline.

- Step 1: Admission Control by CPU and IO Utilizations
The CPU utilization and IO utilization of the transactions in the system are estimated as follows. If there are n transactions in the system and a new transaction Tnew arrives at time t, then

    Percentage of CPU utilization = ( Σ_{i=1..n} CPUtime(Ti)/(DL(Ti) - t) + CPUtime(Tnew)/(DL(Tnew) - t) ) * 100

and

    Percentage of IO utilization = ( Σ_{i=1..n} IOtime(Ti)/(DL(Ti) - t) + IOtime(Tnew)/(DL(Tnew) - t) ) * 100

If the percentage of CPU utilization < CPU-threshold and the percentage of IO utilization < IO-threshold, then the new transaction is admitted. If not, it is rejected.

CPU-threshold and IO-threshold are constants chosen as the thresholds for CPU and IO utilization in each experiment. To be more precise, we would need to estimate the remaining CPU time and the remaining IO time of each transaction T at time t. As we found via the implementation, these estimates are expensive and difficult to obtain. Instead, we simply use the original CPUtime and IOtime of the transaction T and do not try to account for transactions' remaining requirements. This works well and does not incur much overhead.
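The sketch below shows one way Step 1 could be computed, assuming each transaction object carries cpu_time, io_time and deadline attributes and that active_txns holds the user transactions currently in the system (sensor transactions excluded, as explained in Section 5.2). These names, and the guard against already-expired deadlines, are our assumptions, not details taken from the BeeHive code.

    def admit_step1(active_txns, new_txn, now, cpu_threshold, io_threshold):
        def utilization_percent(attr):
            # sum of time(Ti)/(DL(Ti) - t) over transactions in the system, plus the new one
            total = sum(getattr(t, attr) / (t.deadline - now)
                        for t in active_txns if t.deadline > now)
            total += getattr(new_txn, attr) / (new_txn.deadline - now)
            return total * 100.0

        return (utilization_percent("cpu_time") < cpu_threshold and
                utilization_percent("io_time") < io_threshold)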

- Step 2: Admission Control by Execution Response Time
The system response time of a transaction is the sum of its waiting time and its execution response time. If the slack time to its deadline is less than its expected execution time, the transaction is highly likely to miss its deadline after wasting resources in the system. In this study, when its turn to execute comes at time t, the slack time to its deadline is compared with the average execution response time of its type. Let ERT(T) denote the execution response time of the transaction T. If n transactions of this type have completed their executions by time t, then

the average execution response time is (1/n) * Σ_{i=1..n} ERT(Ti).

If (DL(T) - t) > c * (1/n) * Σ_{i=1..n} ERT(Ti), where c is a constant, then the transaction starts execution. Otherwise, the transaction is aborted. The constant c is chosen by tuning.
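A sketch of the Step 2 check follows. It assumes the BeeKeeper keeps, per transaction type, a list of execution response times observed for completed transactions of that type; the bookkeeping names are illustrative.

    def start_or_abort(txn, completed_erts, now, c=0.9):
        # completed_erts: ERT values of previously completed transactions of txn's type.
        if not completed_erts:
            return True                                # no history yet: let it run
        avg_ert = sum(completed_erts) / len(completed_erts)
        return (txn.deadline - now) > c * avg_ert      # True: start execution; False: abort

The default c = 0.9 matches the constant chosen by tuning in Section 5.2.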

5.2 Thresholds of the Admission Controller
To calculate the percentages of CPU utilization and IO utilization in Step 1, the BeeKeeper needs to keep track of both user transactions and sensor transactions in the system. In preliminary tests, it was observed that calculating these utilizations for sensor transactions slows down the system significantly, so that many sensor transactions miss their deadlines; consequently, many user transactions read stale data and miss their data-deadlines. To improve overall system performance, sensor transactions are excluded from the calculations of the percentages of CPU utilization and IO utilization.

A fixed pair of values for CPU-threshold and IO-threshold may not work well for all workloads, so these values are tuned for each workload. Given a workload, a few pairs of values for CPU-threshold and IO-threshold are tested, and a good pair is chosen for the actual experiments. The constant c in Step 2 was chosen to be 0.9 in the experiments.

6. Performance Evaluation
This section presents the experimental setup and the assumptions used in the experiments. The workload model for transactions (user transactions and sensor transactions) is introduced. Using diverse workloads, several experiments are performed with and without admission control. We evaluate the performance in each experiment and identify the ranges where data-deadline cognizant scheduling policies are effective and where admission control plays a role.

The primary performance metric used in the experiments is the miss ratio, i.e., the fraction of user transactions that miss their deadlines, a traditional metric used to evaluate performance in real-time database systems:

    miss ratio = Nmiss / (Nmiss + Nsucceed)

where Nmiss is the number of transactions that miss their deadlines and Nsucceed is the number of transactions that succeed. When admission control is on, the miss ratio is rewritten as follows:

    miss ratio = (Nmiss + Nrejected) / (Nmiss + Nrejected + Nsucceed)

where Nrejected denotes the number of transactions rejected by the admission controller. Our experiments run on a 2*300MHz UltraSPARC II machine. Table 3 shows the settings for the platform and the disk-resident BeeHive database.
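For clarity, the metric can be computed as in the small sketch below (the argument names are ours).

    def miss_ratio(n_miss, n_succeed, n_rejected=0):
        # With admission control off, n_rejected is 0 and this reduces to Nmiss / (Nmiss + Nsucceed).
        total = n_miss + n_rejected + n_succeed
        return (n_miss + n_rejected) / total if total else 0.0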

6.1 Workload Model
The user transaction workload generator has a fixed number of BeeHive clients that generate transaction requests to the BeeHive server. There are three parameters in the user transaction workload generator: the total number of transactions, the number of BeeHive clients, and the transaction arrival rate. In each workload, the total number of transactions and the number of BeeHive clients are fixed. The user transaction workloads are varied by user transaction set, deadline slack factor and transaction arrival rate. The inter-arrival time between arrivals per client is exponentially distributed. The request frequencies of the 8 transaction types in each set are uniformly distributed between 1 and 8.

Table 3 System Platform and Database Configuration
System Platform:
    CPU               2*300MHz UltraSPARC II
    OS                Solaris 5.6
Database Configuration (disk resident):
    BufPoolSize       800Kbytes
    PageSize          8K
    BufManagement     Steal/No Force
    Logging Policy    Redo/Undo

Each temporal object in the database has a corresponding sensor transaction to update it periodically. In this study, the half-half policy [18] is used for the deadline and update period assignments of sensor transactions. User transactions may read stale data, and thus miss their data-deadlines, if temporal objects are not updated within their absolute validity intervals. To keep the data fresh, sensor transactions are assigned higher priorities than user transactions; among themselves, sensor transactions are scheduled by the EDF policy. Table 4 shows the settings for the user transaction and sensor transaction workloads. In each workload, the lengths of the absolute validity intervals of the 4 temporal objects in the BeeHive database are set to the same value; hence, the relative deadlines and the update periods of the 4 sensor transaction types are the same.

Table 4 Workload Model
User transaction workload:
    Number of transaction types:            8
    Transaction length:                     varied by transactions in each set
    Deadline slack factor:                  varied (25, 40)
    Deadline:                               DL(T) = A(T) + SF * EET(T)
    Total number of transaction requests:   8000
    Request frequencies:                    uniformly distributed between 1 and 8
    Transaction arrival rate/sec:           varied (24~80), in increments of 4
    Inter-arrival time:                     exponentially distributed
    Scheduling policy:                      EDF / EDDF / EDF-DC
Sensor transaction workload:
    Number of transaction types:            4
    Length of absolute data validity interval: varied (0.3sec, 0.5sec, 1.0sec, 1.5sec)
    Update period:                          UP(T) = (1/2) * LAVI of object to be updated (periodic)
    Relative deadline:                      RDL(T) = (1/2) * LAVI of object to be updated
    Deadline:                               DL(T) = A(T) + RDL(T)
    Scheduling policy:                      EDF within sensor transactions, always higher priority than user transactions
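A user transaction workload of this kind can be reproduced with a generator along the lines of the sketch below, following Table 4: exponentially distributed inter-arrival times and transaction types drawn uniformly from the 8 types. The function name and parameters are illustrative; the actual BeeHive workload generator additionally fixes the number of clients per workload.

    import random

    def generate_client_workload(total_requests, arrival_rate, n_types=8, seed=0):
        rng = random.Random(seed)
        t = 0.0
        workload = []                                      # (arrival_time, transaction_type) pairs
        for _ in range(total_requests):
            t += rng.expovariate(arrival_rate)             # exponential inter-arrival time
            workload.append((t, rng.randint(1, n_types)))  # type uniform over 1..8
        return workload

    # Example: 8000 requests at an aggregate arrival rate of 36 transactions/sec.
    trace = generate_client_workload(8000, 36.0)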

6.2 Workload Sets
To perform diverse experiments, two sets of user transactions were developed. Each set consists of 8 transaction types, and each transaction is an actual transaction that accesses various airplane objects in the database. Set 1 uses CPU cycles and IO time in the ratio 3.0 to 7.0, with transaction lengths in the range [0.0171sec, 0.0623sec]. Set 2 uses CPU cycles and IO time in the ratio 5.5 to 4.5, with transaction lengths in the range [0.0101sec, 0.0299sec]. Set 1 has longer transactions and accesses more temporal objects than Set 2. These transaction classes are summarized in Table 5. To measure the expected execution time of a transaction, an average execution time was obtained by repeating its execution 20 times; times for data access and some computations within a transaction were measured similarly. The expected execution times of the 8 transaction types in each set and the relative deadlines with different slack factors are shown in Table 6.

By varying the user transaction set, slack factor and absolute validity interval, several workloads were generated. Experiments for the different scheduling policies using these workloads were performed with and without the admission controller; the workloads used in the experiments are summarized in Table 7. We evaluate the performance of the three scheduling policies in each experiment. By comparing the results of different sets of experiments, we determine the effects of the admission controller, the absolute data validity interval, the slack factor, and the difference between IO-bound and CPU-bound sets.

Table 5 Summary for 2 Sets of User Transactions
        Transaction length (sec)   CPU time (sec)     IO time (sec)      CPU time : IO time   CPU time : temp. IO : non-temp. IO
Set 1   [0.0171, 0.0623]           [0.0054, 0.0187]   [0.0117, 0.0436]   30:70                30:16:54
Set 2   [0.0101, 0.0299]           [0.0053, 0.0163]   [0.0047, 0.0154]   55:45                55:13:32

Table 6 Expected Execution Times of Transactions in 2 Sets and Relative Deadlines (sec)
Transaction type                     1       2       3       4       5       6       7       8
Expected execution times, Set 1      0.0171  0.0180  0.0280  0.0360  0.0430  0.0440  0.0550  0.0620
Expected execution times, Set 2      0.0101  0.0133  0.0168  0.0218  0.0227  0.0240  0.0289  0.0299
Relative deadlines, Set 1, SF 25     0.4275  0.4500  0.7000  0.9000  1.0750  1.1000  1.3750  1.5500
Relative deadlines, Set 1, SF 40     0.6840  0.7200  1.1200  1.4400  1.7200  1.7600  2.2000  2.4800
Relative deadlines, Set 2, SF 25     0.2525  0.3325  0.4200  0.5450  0.5675  0.6000  0.7225  0.7475

Table 7 Workload Sets for Experiments
Experiment               User Transaction Set   Slack factor   LAVI of 4 temporal objects   Admission controller
Experiments 1-1/1-2      Set 1                  25             0.5sec                       off/on
Experiments 1-3/1-4      Set 1                  25             1.0sec                       off/on
Experiments 1-5/1-6      Set 1                  25             1.5sec                       off/on
Experiments 1-7/1-8      Set 1                  40             0.5sec                       off/on
Experiments 1-9/1-10     Set 1                  40             1.0sec                       off/on
Experiments 1-11/1-12    Set 1                  40             1.5sec                       off/on
Experiments 2-1/2-2      Set 2                  25             0.3sec                       off/on
Experiments 2-3/2-4      Set 2                  25             0.6sec                       off/on

6.3 Experiments
Each experiment was repeated four times. To fully understand the performance results and the effects of the variables, several statistics were collected in addition to the miss ratio; the representative statistics are described in Table 8. In these experiments, 95% confidence intervals were obtained whose widths are less than 8% of the point estimate of the miss ratio.

6.3.1 Experiments using Set 1 and Slack Factor of 25
Using Set 1 and a slack factor of 25, three workloads were generated by varying the data validity interval: 0.5sec, 1.0sec and 1.5sec. With each workload, two experiments, with and without admission control, were performed.

1) Experiment 1-1

The absolute data validity interval (LAVI) of the workload used in this experiment is 0.5sec. The experiment was performed without the admission controller (AC). Figure 4 shows the performance results of the EDF, EDDF and EDF-DC policies. At medium arrival rates, there are significant performance improvements of EDDF and EDF-DC over EDF. The maximum improvements of EDDF and EDF-DC over EDF are 36.17% and 31.44%, respectively, which occur at the arrival rate of 36 transactions/sec, which from now on we designate simply as 36/sec.

Table 8 Description of Representative Statistics in the Experiments
Dmiss            Number of user transactions that missed deadlines
DDmiss           Number of user transactions that missed data-deadlines
Dmiss ratio      Dmiss / total number of user transactions
DDmiss ratio     DDmiss / total number of user transactions
Smiss            Number of sensor transactions that missed deadlines
Smiss ratio      Smiss / total number of sensor transactions executed
Not admitted     Number of user transactions rejected in Step 1 of the admission controller
Not executed     Number of user transactions rejected in Step 2 of the admission controller
Wait ratio of T  Elapsed time from arrival to the start time t of execution, divided by the relative deadline: (t - A(T)) / (DL(T) - A(T))

Figure 4 Performances of EDF, EDDF and EDF-DC policies without AC (Set 1, SF:25, LAVI:0.5sec, NAC)

Over the entire range of arrival rates [24/sec, 68/sec], the performance of EDDF is the best, EDF-DC is in the middle and EDF is the worst. The performance differences between the three policies at the arrival rate of 24/sec are negligible. The gaps between EDDF and EDF and between EDF-DC and EDF increase as the arrival rate increases in the range [28/sec, 36/sec], and then gradually decrease in the range [40/sec, 68/sec].

The representative statistics of the EDF policy from the experiment are shown in Table 9. At the arrival rate of 24/sec, the system response times of user transactions are much less than their relative deadlines (see Table 7) and most user transactions had enough time to finish within their deadlines; consequently, very few transactions missed their deadlines. Also, the execution response times of user transactions are much smaller than the LAVI of 0.5sec and only 0.02% of sensor transactions missed their deadlines. At this arrival rate, an average of 114.5 data-deadline misses contributes most of the miss ratio. With such a light load, different scheduling algorithms do not result in significant performance differences.

As the arrival rate increases, more transactions miss their deadlines or data-deadlines. For example, for EDF at the arrival rate of 28/sec, the average wait ratio of user transactions is 0.2%; on average, 15.50 transactions missed their deadlines and 890.25 transactions missed their data-deadlines, with some transactions missing both. As a result, the miss ratio is 11.13%. At the arrival rate of 36/sec, the average wait ratio is 35.32%. On average, 2368.25 transactions waited in the wait queue for a long time, did not have enough time to finish their executions, and missed their deadlines. The miss ratio, the deadline miss ratio and the data-deadline miss ratio are 68.05%, 29.60% and 59.14%, respectively. Over the arrival rate range [28/sec, 36/sec], the miss ratio increases very fast; this is the period of transition from shorter to longer waiting times for transactions.

Table 9 Statistics of EDF (Set 1, SF:25, LAVI:0.5sec, NAC)
Arrival    Ave. wait  Dmiss     DDmiss    Dmiss    DDmiss   miss     Smiss    Smiss
rate/sec   ratio                          ratio    ratio    ratio             ratio
24         0.0005     0.25      114.50    0.0000   0.0143   0.0143   1.50     0.0002
28         0.0020     15.50     890.25    0.0019   0.1113   0.1113   5.25     0.0008
32         0.0839     421.00    3390.25   0.0526   0.4238   0.4310   12.25    0.0019
36         0.3532     2368.25   4731.50   0.2960   0.5914   0.6805   27.25    0.0052
40         0.4920     3983.00   3923.25   0.4979   0.4904   0.7165   17.50    0.0036
44         0.5537     4845.00   3293.75   0.6056   0.4117   0.7584   21.50    0.0047
48         0.6121     5514.75   2542.25   0.6893   0.3178   0.7854   15.75    0.0038
52         0.6471     5991.25   2101.00   0.7489   0.2626   0.8241   22.00    0.0051
56         0.6877     6372.50   1653.50   0.7966   0.2067   0.8563   22.50    0.0059
60         0.7033     6554.00   1756.50   0.8193   0.2196   0.8902   53.50    0.0174
64         0.6991     6588.00   2311.00   0.8235   0.2889   0.9225   95.25    0.0420
68         0.7188     6703.25   2469.00   0.8379   0.3086   0.9422   113.75   0.0515

The improvement pattern of EDDF over EDF is similar to that of EDF-DC over EDF, with the latter remaining below the former. The miss ratio differences of EDF from EDDF are in the range [0.0018, 0.3617] and the differences of EDF from EDF-DC are in the range [0.0004, 0.3144]. As the arrival rate increases from 24/sec to 36/sec, the improvements increase rapidly; from 36/sec to 48/sec they decrease quickly.

At the arrival rate of 24/sec, the overall performance of the three scheduling policies is very good. For all three policies, the wait ratios are negligible and the average execution response times are much less than the LAVI of 0.5sec. Very few transactions missed their deadlines and only a few missed their data-deadlines; the data-deadline misses contribute most of the miss ratios. At this arrival rate, there are no significant differences among the three policies.

At medium arrival rates, data-deadline cognizant scheduling policies are very effective. Under EDF, many transactions miss their data-deadlines without missing their deadlines. Under EDDF, the priorities of such transactions are set to their data-deadlines and they are rescheduled during their executions, so some of them may meet their deadlines. Under EDF-DC, when the data-deadlines of such transactions expire, they are aborted earlier than under EDF, without wasting resources in the system.

At high arrival rates, most transactions wait in the wait queue too long and their deadlines expire earlier than their data-deadlines, so it is difficult for data-deadline cognizant scheduling policies to take advantage of their semantics. The performance of all three scheduling policies is not good, but there are still some improvements of EDDF and EDF-DC over EDF. Basically, at very high arrival rates the system is overloaded and thrashes. This implies that an AC would be helpful.

2) Experiment 1-2
The workload for this experiment is the same as that of Experiment 1-1, but with AC. We chose a pair of values for CPU-threshold and IO-threshold as follows: the performances with several candidate pairs of thresholds were compared with each other and with the performance without AC for various arrival rates, and the pair showing the best performance was chosen for the experiment. In this experiment, both CPU-threshold and IO-threshold were chosen to be 40%. Note that sensor transactions are not considered when estimating CPU and IO utilizations, and that the CPUtime and IOtime of a user transaction, rather than its remaining CPU time and remaining IO time at time t, are used to improve the speed of computation and overall performance. With a LAVI of 0.5sec, the update periods of the 4 sensor transactions are 0.25sec, so 16 sensor transactions arrive per second. It turns out that in most cases the utilizations are underestimated compared to the actual utilizations.

Figure 5 shows the performance results of the EDF, EDDF and EDF-DC policies with AC. The maximum performance improvements of EDDF and EDF-DC are 4.39% and 3.62%, respectively. Over the entire range of arrival rates, the performances of EDDF and EDF-DC are only slightly better than that of EDF. When AC is on, consideration of the data-deadline appears to have significantly less impact on performance.

Figure 5 Performances of EDF, EDDF and EDF-DC policies with AC (Set 1, SF:25, LAVI:0.5sec, AC40-40)

The representative statistics of the EDF policy from this experiment are shown in Table 10. As the arrival rate increases, more transactions are rejected by AC, which keeps the system from being overloaded and keeps waiting times negligible. As the arrival rate increases, the execution response times of transactions increase and gradually become significant while the average wait ratio increases extremely slowly. This indicates that, among the admitted transactions, there are many that missed their data-deadlines without missing their deadlines. The data-deadline miss ratio keeps increasing up to the arrival rate of 48/sec and then decreases, fluctuating a little as more transactions are rejected by AC. The overall miss ratio increases very smoothly. Due to space limitations, the statistics for the EDDF and EDF-DC policies are not shown; their trends are very similar. When AC is on, the improvements of EDDF and EDF-DC over EDF are in the ranges [2.65%, 4.39%] and [1.45%, 3.65%], respectively, over arrival rates from 36/sec to 68/sec. There are very small numbers of sensor transaction misses under the three policies: over the entire range, the largest sensor miss ratios in EDF, EDDF and EDF-DC are 0.18%, 0.36% and 0.33%, respectively.

Table 10 Statistics of EDF (Set 1, SF:25, LAVI:0.5sec, AC40-40)
Arrival    Not        Ave. wait  Dmiss   DDmiss    Dmiss    DDmiss   miss     Smiss   Smiss
rate/sec   admitted   ratio                        ratio    ratio    ratio            ratio
24         28         0.0005     0.00    64.00     0.0000   0.0080   0.0115   0.00    0.0000
28         148        0.0006     0.00    265.75    0.0000   0.0332   0.0517   0.25    0.0000
32         431        0.0006     0.00    598.75    0.0000   0.0748   0.1287   1.75    0.0003
36         934        0.0007     0.00    973.75    0.0000   0.1217   0.2384   1.50    0.0003
40         1529       0.0008     0.50    1274.75   0.0001   0.1593   0.3505   1.75    0.0004
44         2052       0.0009     0.25    1441.25   0.0000   0.1802   0.4367   5.75    0.0013
48         2542       0.0009     0.00    1515.75   0.0000   0.1895   0.5072   7.75    0.0018
52         2990       0.0010     0.25    1515.75   0.0000   0.1895   0.5632   6.25    0.0015
56         3337       0.0010     0.50    1448.25   0.0001   0.1810   0.5982   6.50    0.0016
60         3709       0.0011     0.50    1471.50   0.0001   0.1839   0.6476   5.25    0.0014
64         3957       0.0010     0.00    1391.75   0.0000   0.1740   0.6686   6.25    0.0017
68         4191       0.0011     0.00    1350.25   0.0000   0.1688   0.6927   4.75    0.0013

From Experiments 1-1 and 1-2, we can compare the performances of EDF, EDDF and EDF-DC with AC and without AC. For convenience of comparison, Figures 4 and 5 are combined into Figure 6, where EDF-AC and EDF denote the EDF policy with AC on and off, respectively; the others are defined similarly.

Figure 6 Performances of EDF, EDDF and EDF-DC policies with/without AC (Set 1, SF:25, LAVI:0.5sec, AC40-40)

For the three policies without AC, as the arrival rate increases the wait ratios and the execution response times become significant. At medium and high arrival rates, large portions of transactions miss their deadlines or data-deadlines. At low and medium arrival rates, data-deadline misses contribute to the miss ratios more than deadline misses; at high arrival rates, most transactions miss their deadlines because of the short slack time to their deadlines. Since AC rejects more transactions as the arrival rate increases, the waiting times of the admitted transactions are kept negligible, and only small portions of the admitted transactions miss their data-deadlines. As a result, all three policies perform significantly better with AC than without it; for example, at high arrival rates such as 56/sec the admission controller improves the performance by at least 23%.

3) Experiments 1-3 and 1-4
The workload for these experiments was generated using Set 1, an SF of 25 and a LAVI of 1.0sec. Experiments 1-3 and 1-4 were performed with AC off and on, respectively. Figure 7 shows the performances of EDF, EDDF and EDF-DC with and without AC. For AC, both CPU-threshold and IO-threshold were chosen to be 70%. Without AC, the maximum improvement of EDDF over EDF, 21.55%, occurs at the arrival rate of 48/sec, and the maximum improvement of EDF-DC over EDF, 17.83%, occurs at 40/sec. With AC, the performances of EDDF and EDF-DC are slightly better than that of EDF, and the performance improvements are much smaller than those without AC. When AC is off, at the arrival rate of 24/sec, the differences of EDF from EDDF and from EDF-DC are almost negligible. It is evident that EDDF and EDF-DC outperform EDF over the arrival rate range [32/sec, 68/sec]. One can see that there will be no difference among the performances of the three policies when the arrival rate is less than 24/sec, and that there will be a breakpoint where the performance differences become negligible at a load higher than 68/sec, for reasons similar to those discussed for Experiment 1-1. Due to lack of space, the performance improvements of the three policies with AC over without AC are not shown. The performance improvement of EDF-AC over EDF is significant over the entire range; it increases sharply at the beginning and when the arrival rate is greater than or equal to 40/sec. At the arrival rate of 68/sec, the improvements are in the range [0.29, 0.33].

Figure 7 Performances of EDF, EDDF and EDF-DC with/without AC (Set 1, SF:25, LAVI:1.0sec, AC70-70)

4) Experiments 1-5 and 1-6
The workload for these experiments was generated using Set 1, an SF of 25 and a LAVI of 1.5sec. Experiments 1-5 and 1-6 were performed with AC off and on, respectively. For AC, both the CPU-threshold and IO-threshold were chosen to be 80%. Figure 8 shows the performances of EDF, EDDF and EDF-DC with and without AC. Without AC, the overall performance of EDDF is the best, EDF-DC is in the middle and EDF is the worst; the maximum improvements of EDDF and EDF-DC over EDF are 13.70% and 5.37%, respectively, and both occur at the arrival rate of 48/sec. With AC, the performances of EDF and EDF-DC are almost the same and one can hardly distinguish the graph of EDF from that of EDF-DC; the performance of EDDF is slightly better than those of EDF and EDF-DC. When AC is off, EDDF and EDF-DC outperform EDF over the arrival rate range [36/sec, 68/sec], with maximum improvements of 13.70% and 5.37%, respectively. Again, there is no difference among the performances when the arrival rate is less than 24/sec or at loads higher than 68/sec. The performance improvements of the three policies with AC over without AC increase gradually as the arrival rate increases and then decrease a little; they are likely to converge to a value at a load higher than 68/sec.

Figure 8 Performances of EDF, EDDF and EDF-DC with/without AC (Set 1, SF:25, LAVI:1.5sec, AC80-80)

6.3.2 Experiments using Set 1 and Slack Factor of 40
Using Set 1 and an SF of 40, three workloads were generated by varying LAVI: 0.5sec, 1.0sec and 1.5sec. With each workload, two experiments, with and without AC, were performed.

1) Experiments 1-7 and 1-8
The LAVI of the workload for these experiments is 0.5sec. For AC, both the IO-threshold and CPU-threshold were chosen to be 40%. Experiments 1-7 and 1-8 were performed with AC off and on, respectively. Figure 9 shows the performance results without AC. At low and medium arrival rates, the performance trend is very similar to that with an SF of 25 and a LAVI of 0.5sec. At high arrival rates, the performance of EDF with an SF of 40 is also very similar to that with an SF of 25, but the performances of EDDF and EDF-DC with an SF of 40 are better than those with an SF of 25. With an SF of 40, even if transactions wait for a long time at high arrival rates, they have enough slack time to execute and some of them take advantage of the semantics of EDDF and EDF-DC. At the arrival rate of 68/sec, EDDF and EDF-DC outperform EDF by 9.52% and 5.83%, respectively; when SF is 25, their improvements are 4.39% and 3.71%, respectively.

Figure 9 Performances of EDF, EDDF and EDF-DC without AC (Set 1, SF:40, LAVI:0.5sec)

Figure 10 Performances of EDF, EDDF and EDF-DC with AC (Set 1, SF:40, LAVI:0.5sec, AC40-40)

Figure 10 shows the performance results with AC. There are significant performance improvements of EDDF and EDF-DC over EDF, but overall performance is worse than with an SF of 25 and a LAVI of 0.5sec. To compare the performances with an SF of 25 to those with an SF of 40, Figures 5 and 10 are combined to produce Figure 11. Clearly, there are large performance differences between the two groups. When AC is on, with both an SF of 40 and an SF of 25, very few transactions miss their deadlines, but with an SF of 40 the average execution response times are much larger than those with an SF of 25 and a larger portion of the admitted transactions miss their data-deadlines (see Table 11). In EDF, an SF of 25 outperforms an SF of 40 by 27.53%. This is because the admission controller admits more transactions with an SF of 40 than with an SF of 25. Generally, the slack times of user transactions to their deadlines are longer with an SF of 40 than with an SF of 25. With an SF of 40, the transactions having longer slack time can be delayed longer in their executions, since more transactions are in the execution state, and this causes larger average execution response times in the experiment with an SF of 40.

Figures 9 and 10 are combined in Figure 12 to see the effect of AC. The performance trends are quite different from those with a LAVI of 0.5sec and an SF of 25. At first glance, AC does not improve the performance as much as it did with an SF of 25. When AC is on, with both an SF of 40 and an SF of 25, very few transactions miss their deadlines, but with an SF of 40 a larger portion of the admitted transactions miss their data-deadlines than with an SF of 25 (see Table 11). At medium arrival rates, the performances of EDDF with and without AC are the best, EDF-DC's are in the middle, and EDF's are the worst. At high arrival rates, the three policies with AC tend to be better than without AC. Even though many transactions are rejected by AC, a large portion of the admitted transactions still miss their data-deadlines because of long execution response times compared with the data validity interval. As a result, AC does not play a significant role for this workload.

Table 11 Execution Response Times in EDF Policies with Different Slack Factors (Set 1, LAVI: 0.5sec)
                                       Slack factor of 25        Slack factor of 40
Relative deadline range                [0.4275sec, 1.5500sec]    [0.6840sec, 2.4800sec]
Average execution response time        [0.0310sec, 0.5814sec]    [0.0318sec, 0.9251sec]
At arrival rate 44/sec:
    Admitted                           5948                      6039
    Data-deadline misses               1441.00                   3681.00
    Miss ratio                         0.4367                    0.7054

Figure 11 Performances of EDF, EDDF and EDF-DC with SFs 25 and 40 (Set 1, LAVI:0.5sec, AC)

Figure 12 Performances of EDF, EDDF and EDF-DC with/without AC (Set 1, SF:40, LAVI:0.5sec, AC40-40)

2) Experiments 1-9 and 1-10
The data validity interval for the workload used in these experiments is 1.0sec. For AC, the IO-threshold and CPU-threshold were chosen to be 80% and 40%, respectively. Experiments 1-9 and 1-10 were performed with AC off and on, respectively. Figure 13 shows the performance results. Without AC, at low and medium arrival rates the performance pattern is similar to that with an SF of 25 and a LAVI of 1.0sec, but at high arrival rates the performances of the three policies with an SF of 40 are better than those with an SF of 25 (see Figure 7). At the arrival rate of 68/sec, EDDF and EDF-DC outperform EDF by 8.88% and 1.58%, respectively; with an SF of 25, their improvements are 5.66% and 1.60%, respectively.

Figure 13 Performances of EDF, EDDF and EDF-DC with/without AC (Set 1, SF:40, LAVI:1.0sec, AC80-40)

With AC, there are some performance differences among the three policies, but overall performance is worse than with an SF of 25 and a LAVI of 1.0sec; in the EDF policy, an SF of 25 outperforms an SF of 40 by 18.34%. With a LAVI of 1.0sec, overall performance is better than with a LAVI of 0.5sec: from a LAVI of 0.5sec to a LAVI of 1.0sec, the portion of admitted transactions that miss their data-deadlines is reduced considerably. At medium arrival rates, the performance of EDF-AC is worse than those of EDF-DC and EDDF. At high arrival rates, it is evident that the performances with AC are better than those without AC.

3) Experiments 1-11 and 1-12
The data validity interval of the workload for these experiments is 1.5sec. For AC, the IO-threshold and CPU-threshold were chosen to be 90% and 40%, respectively. Experiments 1-11 and 1-12 were performed with AC off and on, respectively. Figure 14 shows the performance results.

Without AC, overall performance is slightly better than with an SF of 25 and a LAVI of 1.5sec, and the performance patterns are very similar. With AC, the performances of the three policies are almost the same; one can hardly distinguish them. The average execution response times under the three policies with AC are much less than those without AC, so the performances with AC are much better than those without AC. At the arrival rate of 68/sec, the performance improvements with AC over without AC are in the range [26.53%, 32.28%]. AC is more effective on this workload than on those with LAVIs of 0.5sec and 1.0sec.

Figure 14 Performances of EDF, EDDF and EDF-DC with/without AC (Set 1, SF:40, LAVI:1.5sec, AC80-40)

4) Effect of Slack Factors
Several experiments were performed with slack factors of 25 and 40. When AC is off, the overall performance of the three policies with an SF of 40 is slightly better than with an SF of 25. But with AC on and LAVIs of 0.5sec and 1.0sec, the overall performance of the three policies with an SF of 40 is worse than with an SF of 25. With a LAVI of 1.5sec, there are no significant performance differences between an SF of 25 and an SF of 40.

Compared to the relative deadlines of user transactions with an SF of 40, LAVIs of 0.5sec and 1.0sec are relatively small (see Table 6), while a LAVI of 1.5sec falls in the middle of the relative deadlines. Compared to the relative deadlines of user transactions with an SF of 25, LAVIs of 0.5sec and 1.0sec fall in the middle (see Table 6), while a LAVI of 1.5sec is relatively large. The admission controller admits more transactions in the tests with an SF of 40 than with an SF of 25. Generally, the slack times of user transactions are longer with an SF of 40 than with an SF of 25. With an SF of 40, the transactions having longer slack time can be delayed longer in their executions, since more transactions are possibly in the execution state, and this causes larger average execution response times. As a result, overall performance with an SF of 40 is not better than with an SF of 25. AC is less effective for the workloads with LAVIs of 0.5sec and 1.0sec than for the workload with a LAVI of 1.5sec. For admission control, we need to consider both the slack times of the transactions and the LAVI; when the LAVI is very short compared to the slack time, we probably need to enhance the admission controller to take slack time and LAVI into account.

6.3.3 Experiments using Set 2 and Slack Factor of 25
Set 2 has shorter transactions and reads fewer temporal objects than Set 1. Using Set 2 and a slack factor of 25, two workloads were generated by varying the data validity interval: 0.3sec and 0.6sec. With an SF of 25, the relative deadlines of the transactions are in the range [0.2525sec, 0.7475sec]. With each workload, two experiments, with and without AC, were performed.

1) Experiments 2-1 and 2-2

The data validity interval of this workload is 0.3sec. For AC, both the IO-threshold and CPU-threshold were chosen to be 90%. Experiments 2-1 and 2-2 were performed with AC off and on, respectively. Figure 15 shows the performance results. Since this workload reads fewer temporal objects, consideration of the data-deadline does not have a significant impact on the performances with or without AC. At low arrival rates, the performance without AC is slightly better than with AC. The reason is that too many transactions are rejected by AC and/or the computational overhead of running AC exceeds its benefit; it might be necessary to choose the IO-threshold and CPU-threshold values more carefully.

Figure 15 Performances of EDF, EDDF and EDF-DC with/without AC (Set 2, SF:25, LAVI:0.3sec, AC90-90)

At medium and high arrival rates, there are only very slight performance differences among the three policies without AC; with AC, the performance differences are negligible. As the arrival rate increases, the performance improvements with AC over without AC become very significant, reaching the range [0.3022, 0.3868] at the arrival rate of 80/sec.

2) Experiments 2-3 and 2-4
The LAVI of this workload is 0.6sec. For AC, the IO-threshold and CPU-threshold were chosen to be 95% and 90%, respectively. Experiments 2-3 and 2-4 were performed with AC off and on, respectively. Figure 16 shows the performance results. The performance patterns are almost the same as those of Experiments 2-1 and 2-2: at low arrival rates, the performances without AC are better than those with AC, and the IO-threshold and CPU-threshold need to be chosen more carefully. At medium and high arrival rates, the performance improvements with AC over without AC are significant. Since the workloads used in the experiments with Set 2 access fewer temporal objects than those with Set 1, consideration of the data-deadline and the difference in data validity intervals do not have a significant impact on the performances with or without AC.

Figure 16 Performances of EDF, EDDF and EDF-DC with/without AC (Set 2, SF:25, LAVI:0.6sec, AC90-90)

7. Conclusion
How to improve system performance while transactions maintain temporal data consistency and meet their deadlines is a challenging problem. In this paper, the traditional EDF scheduling policy and the data-deadline cognizant scheduling policies EDDF and EDF-DC were implemented, and an admission controller for user transactions was developed which considers both CPU and IO utilizations and system response times. Extensive experiments were performed by varying the user transaction set, the slack factor, the data validity interval, the user transaction arrival rate, and whether admission control is on or off. The performances of the data-deadline cognizant scheduling policies EDDF and EDF-DC were compared to that of the baseline EDF policy, and the ranges where data-deadline cognizant scheduling policies are effective and where admission control plays a role were identified.

In summary, data-deadline cognizant scheduling policies are effective for workloads in which user transactions read many temporal objects and the relative deadlines of most user transactions are longer than the absolute data validity intervals of the temporal objects. Data-deadline cognizant scheduling policies improve performance significantly at medium arrival rates, where the system load is neither low nor high. When the system load is very low, the performances of the three policies are very good and the performance differences are negligible. When the system load is high, the waiting times of user transactions are significant and the slack times to the deadlines of user transactions are not long enough to complete executions; in such situations, deadline misses contribute most of the overall miss ratio, and hence data-deadline cognizant scheduling policies do not improve the performance significantly. Generally, the performance of EDDF is the best, EDF-DC is in the middle and EDF is the worst.

The admission controller is effective for all kinds of workloads. Admission control is especially powerful for workloads in which the absolute data validity intervals of temporal objects are longer than or comparable to the relative deadlines of most user transactions. For such workloads, admission control improves performance significantly at high arrival rates; at medium arrival rates, the improvement keeps growing as the arrival rate increases and then converges to a certain value. When the absolute data validity intervals of temporal objects are shorter than the relative deadlines of most user transactions, admission control works better at medium arrival rates than at high arrival rates. The values for CPU-threshold and IO-threshold used by the admission controller need to be selected carefully; by testing several pairs of values for a given workload, good values were chosen and fixed for all arrival rates. In Step 2 of the admission controller, the average execution response time is used to control the execution of a transaction; an average of execution response times accumulated from the beginning may not reflect recent response times well. The admission controller also does not work effectively when the LAVI is small with respect to the slack time of user transactions.

These results represent one of the first sets of experimental performance comparisons from an actual implementation of scheduling policies in a real-time database system. Future work includes the implementation of other scheduling policies, such as new feedback control based scheduling algorithms.


References

[1] R. Abbott and H. Garcia-Molina, “Scheduling Real-Time Transactions: A Performance Evaluation,” ACM Transactions on Database Systems, Vol. 17, No. 3, pp. 513-560, September 1992.

[2] B. Adelberg, H. Garcia-Molina and B. Kao, “Applying Update Streams in a Soft Real-Time Database System,” Proceedings of the 1995 ACM SIGMOD, pp. 245-256, 1995.

[3] A. Bestavros and S. Nagy, “Value-cognizant Admission Control for RTDB Systems,” IEEE 16th Real-Time Systems Symposium, December 1996.

[4] J. Hansson, S.H. Son, J. Stankovic and S. Andler, “Dynamic Transaction Scheduling and Reallocation in Overloaded Real-Time Database Systems,” Proceedings of RTCSA '98, 1998.

[5] J.R. Haritsa, M.J. Carey and M. Livny, “On Being Optimistic about Real-Time Constraints,” Proc. of 9th SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, April, 1990.

[6] J.R. Haritsa, M.J. Carey and M. Livny, “Earliest Deadline Scheduling for Real-Time Database Systems,” Proceedings of the Real-Time Systems Symposium, pp. 232-242, December 1991.

[7] J.R. Haritsa, M.J. Carey and M. Livny, “Data Access Scheduling in Firm Real-Time Database Systems,” The Journal of Real-Time Systems, Vol. 4, No. 3, pp. 203-241, 1992.

[8] J. Huang, J.A. Stankovic, D. Towsley and K. Ramamritham, “Experimental Evaluation of Real-Time Transaction Processing,” Real-Time Systems Symposium, pp. 144-153, December 1989.

[9] J. Huang, J.A. Stankovic, K. Ramamritham, D. Towsley and B. Purimetla, “On Using Priority Inheritance in Real-Time Databases,” Special Issue of Real-Time Systems Journal, Vol. 4, No. 3, September 1992.

[10] T. Kuo and A.K. Mok, “SSP: a Semantics-Based Protocol for Real-Time Data Access,” IEEE Real-Time Systems Symposium, December 1993.

[11] T. Kuo and A.K. Mok, “Real-Time Data Semantics and Similarity-Based Concurrency Control,” IEEE Transactions on Computers, Vol. 19, No. 11, November 2000.

[12] Y. Lin and S.H. Son, “Concurrency Control in Real-Time Databases by Dynamic Adjustment of Serialization Order,” Proceedings of the Real-Time Systems Symposium, pp. 104-112, December 1990.

[13] S. Nagy, A. Bestavros, “Admission Control for Soft-Deadline Transactions in ACCORD,” In 3 rd IEEE Real-time Technology and Applications Symposium, pages 160--165, Montreal, Canada, June 1997.

[14] H. Pang, M.J. Carey and M. Livny, “Multiclass Query Scheduling in Real-Time Database Systems,” IEEE Transactions on Knowledge and Data Engineering, Vol. 7, No. 4, August 1995.

[15] R.M. Ramamritham, “Real-Time Databases,” Distributed and Parallel Databases 1, pp. 199-226, 1993.

[16] R. Sivasankaran, J. Stankovic, D. Towsley, B. Purimetla and K. Ramamritham, “Priority Assignment in Real-Time Active Databases,” The International Journal on Very Large Data Bases, Vol. 5, No. 1, January 1996.

[17] X. Song and J.W.S. Liu, “Maintaining Temporal Consistency: Pessimistic vs. Optimistic Concurrency Control,” IEEE Transactions on Knowledge and Data Engineering, Vol. 7, No. 5, pp. 786-796, October 1995.

[18] J. Stankovic and S. Son, An Architecture and Object Model for Distributed Object-Oriented Real-Time Databases, chosen as one of the outstanding papers from IEEE International Symposium on Object-Oriented Real-Time Distributed Computing (ISORC'98) , Kyoto, Japan, April 1998, also in IJCSSE.

[19] J. Stankovic, S. Son and J. Liebeherr, “BeeHive: Global Multimedia Database Support for Dependable, Real-Time Applications, invited paper,” Second Workshop on Active Real-Time Databases, Lake Como, Italy, September 1997.

[20] M. Xiong, R. Sivasankaran, J. Stankovic, K. Ramamritham and D. Towsley, “Scheduling Transactions with Temporal Constraints: Exploiting Data Semantics,” IEEE 16th Real-Time Systems Symposium, December 1996.

[21] M. Xiong and K. Ramamritham, “Deriving Deadlines and Periods for Real-Time Update Transactions,” 20th IEEE Real-Time Systems Symposium, December 1999, pp. 32-43.

