Dependability and Performance Measures for the Database Practitioner


Toby J. Teorey and Wee Teck Ng

Abstract -- We estimate the availability, reliability, and mean transaction time (response time) for repairable database configurations, centralized or distributed, in which each service component is continuously available for repair. Reliability, the probability that the entire transaction can execute properly without failure, is computed as a function of mean time to failure (MTTF) and mean time to repair (MTTR). Mean transaction time in the system is a function of the mean service delay time for the transaction over all components, plus restart delays due to component failures, plus queuing delays for contention. These estimates are potentially applicable to more generalized distributed systems.

Index terms -- Database performance estimation, response time, reliability, dependability, restart delays, queuing delays, mean time to failure.

I. Introduction

The increasing availability and importance of centralized, distributed, and multidatabases raise serious concerns about their dependability in a fragile network environment, far more so than with centralized databases alone. Although the major impetus for distributed data is to increase data availability, it is not always clear whether the dependability of the many hardware and software components of a distributed system actually provides the level of availability desired. Performance of a database system is closely related to dependability, and it cannot be good if dependability is low.

Failures occur in many parts of a computer system: at the computer sites, the storage media (disk), the communication media, and in the database transactions [1], [3], [6], [11]. Site failures may be due to hardware (CPU, memory, power failure) or software system problems. Disk failures may occur from operating system software bugs, controller problems, or head crashes.
In the network there may be errors in messages, including lost messages and improperly ordered messages, as well as line failures. Transaction failures may be due to bad data, constraint failure, or deadlock [7]. Each of these types of failures contributes to the degradation of the overall dependability of a system.

A significant amount of research has been reported on the dependability of computer systems, and a large number of analytical models exist to predict reliability for such systems [8], [13], [14]. While these models provide an excellent theoretical foundation for computing dependability, there is still a need to transform the theory to practice [2], [12], [16]. Our goal is to provide the initial step in such a transformation with a realistic set of system parameters, a simple analytical model, and a comparison of the model predictions with a discrete event simulation tool.

Based on the definitions in [8], [15], the dependability of any system can be thought of as composed of three basic characteristics: availability, reliability, and serviceability. The steady-state availability is the probability that a system will be operational at any random point of time, expressed as the expected fraction of time a system is operational during the period it is required to be operational. The reliability is the probability that a system will perform its intended function properly without failure and satisfy specified performance requirements during a given time interval [0,t] when used in the manner intended. The serviceability or maintainability is the probability of successfully performing and completing a corrective maintenance action within a prescribed period of time with the proper maintenance support.
We look at the issues of availability and reliability in the context of simple database transactions (and their subtransactions) in a network environment where the steady-state availability is known for individual system components: computers, networks, the various network interconnection devices, and possibly their respective subcomponents. A transaction path is considered to be a sequential series of resource acquisitions and executions, with alternate parallel paths allowable.

We assume that all individual system components, software and hardware, are repairable [8]. A nonrepairable distributed database is one in which transactions can be lost and the system is not available for repair. In a repairable distributed database, all components are assumed to be continuously available for repair, and any aborted transaction is allowed to restart from its point of origin. We will only consider repairable databases here. Serviceability is assumed to be deterministic in our model, but the model could be extended for probabilities less than 1 that the service will be successfully completed on time.

II. Availability

Availability can be derived in terms of the mean time to failure (MTTF) and the mean time to repair (MTTR) for each component used in a transaction. Note that from [15] we have the basic relationship for mean time between failures (MTBF):

MTBF = MTTF + MTTR (1)

The steady-state availability of a single component i can be computed by

A_i = MTTF_i / (MTTF_i + MTTR_i) = MTTF_i / MTBF_i (2)

Let us look at the computation of steady-state availability in the network underlying the distributed or multidatabase. In Fig. 1a two sites, S1 and S2, are linked with the network link L12. Let A_S1, A_S2, and A_L12 be the steady-state availabilities for components S1, S2, and L12, respectively.
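The component availability of Eq. 2 is straightforward to compute. Below is a minimal sketch in Python; the MTTF and MTTR values are illustrative assumptions, not figures from the paper:

```python
def availability(mttf: float, mttr: float) -> float:
    """Steady-state availability, Eq. 2: A_i = MTTF_i / (MTTF_i + MTTR_i)."""
    return mttf / (mttf + mttr)

# Illustrative values: a site that fails on average every 1000 hours
# and takes 2 hours to repair is available 1000/1002 of the time.
a_site = availability(mttf=1000.0, mttr=2.0)
print(round(a_site, 6))  # 0.998004
```

Note that the denominator is MTBF from Eq. 1, so a component's availability rises either by failing less often (larger MTTF) or by repairing faster (smaller MTTR).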
Assuming that each system component is independent of all other components, the probability that path S1/L12/S2 is available at any randomly selected time t is the product of the individual availabilities in series:

A_S1/L12/S2 = A_S1 * A_L12 * A_S2 (3)

Extending the concept of availability to parallel paths as shown in Fig. 1b, we factor out the two components, S1 and S2, that are common to each path:

A_S1//S2 = A_S1 * A_S2 * [availability of the connecting paths between S1 and S2] (4)

Eq. 4 states that the total path from site S1 to site S2 has three serial components: S1, S2, and the two possible connecting paths between the sites. We simply partition the whole path into three serial parts and apply Eq. 3 to them to determine the total path availability. We now need to determine the actual value of the third component of availability, the connecting paths. This is given by the well-known relationship for parallel independent events: the total availability of a parallel path is the sum of the serial availabilities of the two separate paths, minus the product of their serial availabilities.

A_S1//S2 = A_S1 * A_S2 * [A_L12 + A_L13*A_S3*A_L32 - A_L12*A_L13*A_S3*A_L32] (5)

Figure 1. Network paths for a distributed database

We now have the basic relationships for serial and parallel paths for steady-state availability. We note that if query optimizers pick the shortest path without regard to availability, the system could reverse the decision if the selected path is not available. Because of the extreme complexity of computing reliability for parallel paths, we focus the remaining discussion on serial paths only, to illustrate the basic concepts of combining reliability and performance into a single measure.

III. Reliability

An estimate of availability is limited to a single point in time. We now need to estimate the reliability for an entire transaction (including queries and/or updates), and in Sec. 4 the mean transaction completion time for a repairable distributed or multidatabase that has automatic restarts. Reliability is the probability that the entire transaction can execute properly (over a given time interval) without failure, and we need to compute the estimated mean reliability over a time duration [0,t], where t is the mean delay experienced over the system during the transaction execution.

For tractability we first assume that the number of failures of each system component in a time interval is Poisson distributed (equivalently, that times between failures are exponentially distributed):

P_i(k,t) = (m_i*t)^k * e^(-m_i*t) / k! (6)

This is the probability that there are exactly k failures of component i in time interval t, where m_i is the mean number of failures of component i per unit time. The probability that there are no failures of component i in time interval t is:

P_i(0,t) = e^(-m_i*t) (7)

Let MTTF_i = mean time to failure of component i, MTTR_i = mean time to repair for component i, and D = mean service delay time for the transaction. We can now compute the reliability of the transaction at component i, i.e., the joint probability that the transaction has no failures while actively using component i. This is equal to the conditional probability that component i is reliable over the interval (0, D) times the probability that component i is available at the beginning of that interval. That is,

P_i(0,D) = e^(-m_i*D) * A_i (8)

where m_i = failure rate = expected number of failures per unit time = 1/MTTF_i. The reliability of the entire transaction is the product of the (assumed independent) reliabilities of the transaction for each component.

A. Example: Database transaction reliability

Let us now apply the relationship in Eq. 8 to the serial-path database configuration over the network in Fig. 1a. The transaction reliability is equal to the probability that the transaction can be completed without failure from initiation at site S1, local access to the data in site S2, and return of the result to site S1.
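The serial, parallel, and per-component reliability relationships above (Eqs. 3, 5, and 8) combine into one small calculation. The sketch below is in Python; all MTTF, MTTR, and delay values are illustrative assumptions, not measurements from the paper:

```python
import math

def availability(mttf: float, mttr: float) -> float:
    # Eq. 2: A_i = MTTF_i / (MTTF_i + MTTR_i)
    return mttf / (mttf + mttr)

def serial(*avails: float) -> float:
    # Eq. 3: availabilities of independent components in series multiply.
    result = 1.0
    for a in avails:
        result *= a
    return result

def parallel(a1: float, a2: float) -> float:
    # Bracketed term of Eq. 5: two independent alternate paths.
    return a1 + a2 - a1 * a2

def reliability(mttf: float, mttr: float, delay: float) -> float:
    # Eq. 8: P_i(0, D) = e^(-m_i*D) * A_i, with failure rate m_i = 1/MTTF_i.
    return math.exp(-delay / mttf) * availability(mttf, mttr)

# Illustrative parameters (hours) for the serial path S1 -> L12 -> S2 of Fig. 1a.
params = {"S1": (1000.0, 2.0), "L12": (500.0, 1.0), "S2": (1000.0, 2.0)}
delay = {"S1": 0.001, "L12": 0.0005, "S2": 0.002}  # mean service delay per component

# Path availability (Eq. 3) and transaction reliability (product of Eq. 8 terms).
a_path = serial(*(availability(*params[c]) for c in params))
r_txn = 1.0
for c in params:
    r_txn *= reliability(*params[c], delay[c])
print(f"path availability ~ {a_path:.6f}")
print(f"txn reliability   ~ {r_txn:.6f}")
```

For the two alternate connecting paths of Fig. 1b, the bracketed term of Eq. 5 would be computed as `parallel()` applied to the availability of L12 and the serial availability of L13, S3, and L32, then multiplied by the availabilities of the common endpoints S1 and S2.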
We assume that the transaction is successful only if all components are active the entire time required to service the transaction. We are given the mean delay experienced by each subtransaction on each component resource,
