Testing of Object-Oriented Programming Systems (OOPS): A Fault-Based Approach

Jane Huffman Hayes
Science Applications International Corporation
1213 Jefferson-Davis Highway, Suite 1300
Arlington, Virginia 22202

Abstract. The goal of this paper is to examine the testing of object-oriented systems and to compare and contrast it with the testing of conventional programming language systems, with emphasis on fault-based testing. Conventional system testing, object-oriented system testing, and the application of conventional testing methods to object-oriented software will be examined, followed by a look at the differences between testing of conventional (procedural) software and the testing of object-oriented software. An examination of software faults (defects) will follow, with emphasis on developing a preliminary taxonomy of faults specific to object-oriented systems. Test strategy adequacy will be briefly presented. As a result of these examinations, a set of candidate testing methods for object-oriented programming systems will be identified.

1 Introduction and Overview

Two major forces are driving more and more people toward Object-Oriented Programming Systems (OOPS): the need to increase programmer productivity, and the need for higher reliability of the developed systems. It can be expected that sometime in the near future there will be reusable "trusted" object libraries that will require the highest level of integrity. All this points to the need for a means of assuring the quality of OOPS. Specifically, a means for performing verification and validation (V&V) on OOPS is needed. The goal of this paper is to examine the testing of object-oriented systems and to compare and contrast it with the testing of conventional programming language systems, with emphasis on fault-based testing.

1.1 Introduction to Object-Oriented Programming Systems (OOPS)

Object-Oriented Programming Systems (OOPS) are characterized by several traits, the most important being that information is localized around objects (as opposed to functions or data) [3]. Meyer defines object-oriented design as "the construction of software systems as structured collections of abstract data type implementations" [15]. To understand OOPS, several high-level concepts must be introduced: objects, classes, inheritance, polymorphism, and dynamic binding.

Objects represent real-world entities and encapsulate both the data and the functions (i.e., state and behavior) that deal with the entity. Objects are run-time instances of classes. Classes define a set of possible objects. A class is meant to implement a user-defined type (ideally an Abstract Data Type (ADT) to support data abstraction), and the goal is to keep the implementation details private to the class (information hiding). Inheritance refers to the concept of a new class being declared as an extension or restriction of a previously defined class [15]. Polymorphism refers to the ability to take more than one form; in object-oriented systems, it refers to a reference that can, over time, refer to instances of more than one class. The static type of the reference is determined from the declaration of the entity in the program text, while the dynamic type of a polymorphic reference may change from instant to instant during program execution. Dynamic binding refers to the fact that the code associated with a given procedure call is not known until the moment of the call at run-time [12].
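To make these definitions concrete, the following minimal sketch (in Java, chosen here only for concreteness) shows a small class hierarchy, a polymorphic reference whose static type is fixed in the program text, and dynamic binding selecting the implementation at run-time. The Shape, Circle, and Square classes are illustrative assumptions, not examples taken from the paper.

```java
// Illustrative sketch: classes, inheritance, polymorphism, and dynamic binding.
class Shape {
    // Behavior defined by the base class; subclasses may override it.
    double area() { return 0.0; }
}

class Circle extends Shape {
    private final double radius;               // encapsulated state (information hiding)
    Circle(double radius) { this.radius = radius; }
    @Override
    double area() { return Math.PI * radius * radius; }
}

class Square extends Shape {
    private final double side;
    Square(double side) { this.side = side; }
    @Override
    double area() { return side * side; }
}

public class PolymorphismDemo {
    public static void main(String[] args) {
        // The static type of 's' is Shape; its dynamic type changes at run-time.
        Shape s = new Circle(1.0);
        System.out.println(s.area());           // dynamic binding selects Circle.area()
        s = new Square(2.0);
        System.out.println(s.area());           // the same call site now runs Square.area()
    }
}
```

Both calls to s.area() look identical in the program text; which implementation executes is decided only when the call is made at run-time.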
Applying the principles of polymorphism and inheritance, it can be envisioned that a function call could be associated with a polymorphic reference. Thus the function call would need to know the dynamic type of the reference. This provides a tremendous advantage to programmers over conventional programming languages (often referred to as procedural languages): the ability to request an operation without explicitly selecting one of its variants (this choice occurs at run-time) [16].

1.2 Introduction to Verification and Validation (V&V)

Verification and validation refer to two different processes that are used to ensure that computer software reliably performs the functions that it was designed to fulfill. Though different definitions of the terms verification and validation exist, the following definitions from ANSI/IEEE Standard 729-1983 [10] are the most widely accepted and will be used in this paper:

Verification is the process of determining whether or not the products of a given phase of the software development cycle fulfill the requirements established during the previous phase.

Validation is the process of evaluating software at the end of the software development process to ensure compliance with software requirements.

1.3 Introduction to Testing

Testing is a subset of V&V. The IEEE definition of testing is "the process of exercising or evaluating a system or system component by manual or automated means to verify that it satisfies specified requirements or to identify differences between expected and actual results" [10]. Conventional software testing is divided into two categories: static testing and dynamic testing. Static testing analyzes a program without executing it, while dynamic testing relies on execution of the program to analyze it [5]. The focus of this paper will be dynamic testing. Dynamic testing is often broken into two categories: black-box testing and white-box testing. Black-box testing refers to functional testing which does not take advantage of implementation details and which examines the program's functional properties. White-box testing, on the other hand, uses information about the program's internal control flow structure or data dependencies. A short sketch contrasting the two approaches appears at the end of this section.

1.4 Research Approach

Figure 1.4-1 depicts the research approach to be taken in this paper. The examination of conventional system testing, object-oriented system testing, and the application of conventional testing methods to object-oriented software will be the first area of concern (sections 2.1 and 2.2). The differences between testing of conventional (procedural) software and the testing of object-oriented software will be examined next (section 2.3). An examination of software faults (defects) will follow, with emphasis on developing a preliminary taxonomy of faults specific to object-oriented systems (section 3). Test strategy adequacy will briefly be presented (section 4). As a result of these examinations, a set of candidate testing methods for object-oriented programming systems will be identified (section 5). It is also possible that some object-oriented software faults which are not currently detected by any testing methods (conventional or object-oriented specific) will be uncovered. It should be noted that despite the future research that is needed, a preliminary recommendation on test methods can be made based on the similarities between OOPS and procedural language systems.
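As a brief illustration of the black-box/white-box distinction introduced in section 1.3, the following Java sketch exercises a single unit from both perspectives. The abs routine, its specification, and the check helper are illustrative assumptions rather than examples from the paper.

```java
// Illustrative sketch: the same unit exercised from a black-box and a white-box perspective.
public class AbsDemo {
    // Unit under test: specified to return the absolute value of x.
    static int abs(int x) {
        if (x < 0) {
            return -x;
        }
        return x;
    }

    static void check(boolean condition, String label) {
        System.out.println((condition ? "PASS " : "FAIL ") + label);
    }

    public static void main(String[] args) {
        // Black-box (functional) cases: chosen only from the specification
        // (negative, zero, and positive inputs), ignoring the implementation.
        check(abs(-5) == 5, "black-box: negative input");
        check(abs(0) == 0,  "black-box: zero input");
        check(abs(7) == 7,  "black-box: positive input");

        // White-box (structural) cases: chosen from the internal control
        // flow, covering both outcomes of the 'x < 0' branch.
        check(abs(-1) == 1, "white-box: branch taken");
        check(abs(1) == 1,  "white-box: branch not taken");
    }
}
```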
2 Testing Methods

An examination of testing methods for conventional programming language systems follows, as well as a look at the applicability of these testing methods to object-oriented programming systems. A discussion of testing methods specific to OOPS will then be presented.

2.1 Conventional Programming Language System Testing Methods

2.1.1 Testing Methods

Much has been written concerning the testing of conventional (or procedural) language systems. Some of the earlier works include The Art of Software Testing by Myers [22], "Functional Program Testing" by Howden [9], and more recently Software Testing Techniques by Boris Beizer [2]. The first reference [22] focused on explaining testing and leading the reader through realistic examples. It also discussed numerous testing methods and defined testing terminology. Howden's reference focused on functional testing, which is probably the most frequently applied testing method. Finally, Beizer's text provided a veritable encyclopedia of information on the many conventional testing techniques available and in use.

For this paper, the test method taxonomy of Miller is used [19]. Testing is broken into several categories: general testing; special input testing; functional testing; realistic testing; stress testing; performance testing; execution testing; competency testing; active interface testing; structural testing; and error-introduction testing.

General testing refers to generic and statistical methods for exercising the program. These methods include: unit/module testing [24]; system testing [7]; regression testing [23]; and ad-hoc testing [7]. Special input testing refers to methods for generating test cases to explore the domain of possible system inputs. Specific testing methods included in this category are: random testing [1]; and domain testing (which includes equivalence partitioning, boundary-value testing, and the category-partition method) [2]. Functional testing refers to methods for selecting test cases to assess the required functionality of a program. Testing methods in the functional testing category include: specific functional requirement testing [9]; and model-based testing [6]. Realistic test methods choose inputs/environments comparable to the intended installation situation. Specific methods include: field testing [28]; and scenario testing [24]. Stress testing refers to choosing inputs/environments which stress the design/implementation of the code. Testing methods in this category include: stability analysis [7]; robustness testing [18]; and limit/range testing [24]. Performance testing refers to measuring various performance aspects with realistic inputs. Specific methods
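As an illustration of the domain testing methods named above, the following Java sketch derives equivalence-partitioning and boundary-value test cases for a hypothetical routine; the isPassing unit, its 0..100 specification, and the helper methods are illustrative assumptions, not material from the paper or from Miller's taxonomy.

```java
// Illustrative sketch of domain testing: equivalence partitioning and
// boundary-value test cases for a simple unit.
public class DomainTestingDemo {
    // Unit under test: specified to accept scores in the range 0..100 and
    // report whether the score is passing (>= 60).
    static boolean isPassing(int score) {
        if (score < 0 || score > 100) {
            throw new IllegalArgumentException("score out of range: " + score);
        }
        return score >= 60;
    }

    static void expect(boolean condition, String label) {
        System.out.println((condition ? "PASS " : "FAIL ") + label);
    }

    static boolean rejects(int score) {
        try { isPassing(score); return false; }
        catch (IllegalArgumentException expected) { return true; }
    }

    public static void main(String[] args) {
        // Equivalence partitioning: one representative per partition
        // (invalid-low, valid-failing, valid-passing, invalid-high).
        expect(rejects(-10),         "invalid partition below range");
        expect(!isPassing(30),       "valid partition: failing score");
        expect(isPassing(80),        "valid partition: passing score");
        expect(rejects(150),         "invalid partition above range");

        // Boundary-value testing: inputs at and immediately beside the edges.
        expect(rejects(-1),          "just below lower bound");
        expect(!isPassing(0),        "lower bound");
        expect(!isPassing(59),       "just below pass mark");
        expect(isPassing(60),        "pass mark");
        expect(isPassing(100),       "upper bound");
        expect(rejects(101),         "just above upper bound");
    }
}
```

Each partition contributes one representative case, while the boundary-value cases sit at and immediately beside the edges of the specified range, where off-by-one faults are most likely to surface.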
