Testability of Object-Oriented Systems: a Metrics-based Approach

Magiel Bruntink

© 2003 Magiel Bruntink

Cover image 'Uzumaki' © 2003 Akiyoshi Kitaoka, used with permission.

    'U' means rabbits, 'zu' means figure, 'maki' means rotation, and 'Uzumaki' means spirals or swirls. Yellow represents the color of the moon in Japan and it is imagined (though not seriously) that rabbits live in the moon and make mochi (food made from rice).

    Akiyoshi Kitaoka, describing the meaning of 'Uzumaki' on his web site.

The drawing appeared in his Japanese book Trick Eyes 2, and quickly became popular on the Internet.

This work has been typeset using Leslie Lamport's LaTeX, in Donald Knuth's Computer Modern Roman font, 11 pt.

Testability of Object-Oriented Systems: a Metrics-based Approach

Magiel Bruntink
September 2003

Master's Thesis
Universiteit van Amsterdam
Faculteit der Natuurwetenschappen, Wiskunde en Informatica
Section: Programmatuur
Graduation supervisor: Prof. dr. Paul Klint
Supervisors: Dr. Arie van Deursen & Dr. Tobias Kuipers

Preface

When I first saw the 'Uzumaki' drawing, it struck me how well it captured my state of mind at the beginning of the work on this thesis. The topic I had chosen to explore appeared to be huge and very difficult, and many things of interest were scattered throughout it. I was unsure where to focus my attention. One cannot help but feel the same way when looking at 'Uzumaki'; wherever one looks, there is always something going on in another part of the drawing. Hence I decided to put 'Uzumaki' on the cover of my thesis.

However, the analogy does not stop there. If one can resist the urge to keep looking at the moving wheels of 'Uzumaki', and focus on only one, all the wheels will eventually stop moving. This thesis is the result of focusing on one particular aspect of the topic: testability. I thank my coaches, Tobias Kuipers and Arie van Deursen, for supporting me in resisting the urge to keep looking at the moving wheels.

The topic of this thesis was conceived by Arie, who is never short of interesting ideas or new perspectives. Together with Tobias and Paul Klint, it was decided that the project would be performed at the Software Improvement Group. I was amazed by how readily Tobias agreed to be my coach and allowed me to work at the SIG. Arie, Paul and Tobias, thanks a lot for your feedback and support.

Working at the SIG has been an excellent experience, for which I'd like to thank my colleagues. In particular, I am grateful to Alex van den Bergh, Gerard Kok, Gerjon de Vries, Joost Visser, Patrick Duin and Peter van Helden for allowing me a glimpse of the real world of software engineering.

A great advantage of working at a company is the possibility to explore the industry. In my case, I visited a number of people at other companies and interviewed them about their testing efforts. I'd like to thank Joris Alkema, Rutger Buckmann and Vincent van Moppes, all of Epictoid, René Clerc of Tryllian, and Sylvia Verschueren of ABN-AMRO for their patience and time to answer my questions.

Finally, I am grateful to my friends and family for their support and for keeping me in touch with the world outside of research. In particular, Adam Booij was a great help with statistics, and Steven de Rooij was brave enough to read early versions of this work. However, none of this would have been possible without the support of my parents, Hiljan and Roel, to whom I dedicate this thesis.

Contents

1 Introduction
  1.1 Problem Statement
  1.2 Software Testing
  1.3 Testability
  1.4 Assessing Testability Using Metrics
  1.5 Overview
2 Testability
  2.1 The Fish-bone In More Detail
  2.2 A Model Of Testability
    2.2.1 Test Case Generation Factors
    2.2.2 Test Case Construction Factors
3 Related Work
  3.1 Fault Sensitivity
  3.2 Information Loss
  3.3 Visibility
  3.4 Observability And Controllability
  3.5 Test-critical Dependencies
  3.6 Conclusion
4 Metrics
  4.1 Source-based Metrics
    4.1.1 Notation
    4.1.2 Depth Of Inheritance Tree (DIT)
    4.1.3 Fan Out (FOUT)
    4.1.4 Lack Of Cohesion Of Methods (LCOM)
    4.1.5 Lines Of Code Per Class (LOCC)
    4.1.6 Number Of Children (NOC)
    4.1.7 Number Of Fields (NOF)
    4.1.8 Number Of Methods (NOM)
    4.1.9 Response For Class (RFC)
    4.1.10 Weighted Methods Per Class (WMC)
  4.2 Setting Up The Experiments
  4.3 Methods
  4.4 Implementation
5 Case Studies
  5.1 DocGen
  5.2 Apache Ant
  5.3 Results
  5.4 Discussion
  5.5 Conclusion
6 Conclusions
  6.1 Contributions
  6.2 Limitations
  6.3 Future Directions
A Results Of The Case Studies
B Report On The Interviews
  B.1 Introduction
  B.2 Results
  B.3 Discussion
  B.4 Conclusion
Bibliography

List of Figures

1.1 The testability fish-bone.
2.1 The sign method.
2.2 Control-flow graph of method sign.
2.3 Signature of the integerAdd method.
2.4 Possible implementation of the integerAdd method.
2.5 Alternate implementation of the sign method.
3.1 Example of a component specification.
3.2 Plot of the function (m − 1)^n / (m^n − 1).
3.3 Overview of the inputs and outputs of a method.
3.4 Example of a non-observable method.
3.5 Example of an observable method.
3.6 Example of a non-controllable method.
3.7 Example of a controllable method.
4.1 Methods overview.
A.1 DocGen: DIT
A.2 DocGen: FOUT
A.3 DocGen: LCOM
A.4 DocGen: LOCC
A.5 DocGen: NOC
A.6 DocGen: NOF
A.7 DocGen: NOM
A.8 DocGen: RFC
A.9 DocGen: WMC
A.10 Ant: DIT
A.11 Ant: FOUT
A.12 Ant: LCOM
A.13 Ant: LOCC
A.14 Ant: NOC
A.15 Ant: NOF
A.16 Ant: NOM
A.17 Ant: RFC
A.18 Ant: WMC

List of Tables

5.1 Measurement results for DocGen.
5.2 Measurement results for Ant.
5.3 Hotelling's t values for the source-based metrics versus LOCC.
A.1 Spearman correlations between the metrics.

Chapter 1

Introduction

1.1 Problem Statement

In this thesis we investigate factors of the testability of object-oriented software systems. The starting point is a study of the literature, which yields an initial model of testability and a set of related software metrics. Subsequently, the metrics are evaluated by means of two case studies of large Java systems. The goal of this thesis is to define and evaluate a set of metrics that can be used to assess the testability of the classes of a Java system.

1.2 Software Testing

Programmers are human beings. Human beings are prone to make errors during most of their activities, and software development is no exception. Thus the need arises for verification of the intermediate and final products of software development. Software testing is the practice of running a piece of software in order to verify that it works as expected. The errors made by programmers have the potential of introducing faults in the program.
Typically, faults are confined to a single program statement, but more complex, distributed faults can occur too. Faults in a program have the capability of causing the program to fail: a failure happens when the program produces output that differs from the expected output. In short, programmers make errors and thereby introduce faults in their programs, which then become prone to failure. The terms we use here are defined more thoroughly by the IEEE in [12].

Software testing occurs (or should occur!) during multiple phases of the construction of a software system. Typically, the software development methodology determines both the kind of testing and the phase(s) during which testing is done. Since methodology is not our focus here, we restrict ourselves to a brief description of the kinds of testing that are common in practice. It is useful to consider the several aspects of testing separately. The following overview of software testing is based on the Software Engineering Body of Knowledge (SWEBOK) [2].

First, we look at the level at which testing can take place.

- Unit testing is concerned with verifying the behavior of the smallest isolated components of the system. Typically, this kind of testing is performed by developers or maintainers and involves using knowledge of the code itself. In practice, it is often hard to test components in isolation, since components tend to rely on others to perform their function. (A small sketch of a unit test follows after this list.)

- Integration testing focuses on verifying the interactions between the components of the system. The components are typically subjected to unit testing before integration testing starts. A strategy that determines the order in which components should be combined usually follows from the architecture of the system.
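To make the error/fault/failure terminology and the notion of a unit test concrete, consider the following minimal sketch. It is illustrative only and not taken from the thesis: the Sign and SignTest classes are hypothetical (a sign method does appear as an example in Chapter 2, but the code below is our own), and JUnit 4 is assumed as the unit testing framework. The fault is a single wrong comparison, and it causes a failure only when a test input actually exercises the faulty statement.

    // Sign.java -- a hypothetical component containing a seeded fault.
    public class Sign {
        // Intended behavior: return 1 for positive x, -1 for negative x, 0 for zero.
        public static int sign(int x) {
            if (x > 0) return 1;
            if (x < -1) return -1;  // Fault: the comparison should be x < 0.
            return 0;               // Consequently sign(-1) wrongly returns 0.
        }
    }

    // SignTest.java -- a unit test (JUnit 4 style) exercising Sign in isolation.
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class SignTest {
        @Test
        public void positiveInput() {
            assertEquals(1, Sign.sign(5));    // passes: the faulty statement is never reached
        }

        @Test
        public void negativeInputs() {
            assertEquals(-1, Sign.sign(-2));  // passes: -2 < -1 happens to hold
            assertEquals(-1, Sign.sign(-1));  // fails: expected -1, actual 0
        }
    }

Note how the terminology maps onto the sketch: the programmer's error produced a fault (the incorrect comparison), but a failure (observably wrong output) occurs only for the single input -1 that drives execution through the faulty statement. A test suite that never tries -1 would leave the fault undetected, which is precisely why the adequacy of test inputs matters.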