Software Testing in the Textbook
Software Testing (Chapter 8)
• Introduction (Verification and Validation)
• 8.1 Development testing
• 8.2 Test-driven development
• 8.3 Release testing
• 8.4 User testing

Verification and Validation
• Verification:
- The software should conform to its specification (functional and non-functional requirements).
- "Are we building the product right?"
• Validation:
- The software should do what the customer really requires.
- "Are we building the right product?"
- Requirements don't always reflect the real wishes and needs of customers and users.

Verification and Validation: Goals
• Establish confidence that the software is fit for purpose.
• NOT that it is completely free of defects.
- That is generally not an achievable goal.
• Good enough for its intended use.
- The type of use determines the degree of confidence that is needed.

Verification and Validation: confidence level
The required confidence level depends on:
• Software purpose
- How critical is the software to an organization?
• User expectations
- Users may have low expectations of certain kinds of software.
• Marketing environment
- Getting a product to market early may be more important than finding all the defects in the program.

Verification Techniques
• Software inspections (static verification)
- Analyze static system representations to discover problems.
- Applied to specifications, design models, source code, and test plans.
• Software testing (dynamic verification)
- The system is executed with simulated test data.
- Check the results for errors, anomalies, and data regarding non-functional requirements.

[Figure: V&V at different stages in the software process — inspections can be applied to the requirements specification, software architecture, UML design models, database schemas, and program; testing requires an executable system prototype or program.]

Software Testing Goals
• Demonstrate that the software meets its requirements (validation testing).
- Collect test cases that show that the system operates as intended.
• Discover situations in which the behavior of the software is incorrect or undesirable (defect testing).
- Create test cases that make the system perform incorrectly.
- Then fix the defect: debugging.
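The two goals can be contrasted with a small sketch. The Money class below is hypothetical (its add operation and currency check are illustrative, not from the textbook): a validation test shows normal use operating as intended, while a defect test deliberately provokes a failure and checks that it is handled gracefully.

```java
// Sketch only: a hypothetical Money class used to contrast the two goals.
public class Money {
    private final int amount;
    private final String currency;

    public Money(int amount, String currency) {
        this.amount = amount;
        this.currency = currency;
    }

    public int amount() { return amount; }

    public Money add(Money other) {
        // Defensive check: adding different currencies is invalid input.
        if (!currency.equals(other.currency)) {
            throw new IllegalArgumentException("currency mismatch");
        }
        return new Money(amount + other.amount, currency);
    }

    public static boolean validationTest() {
        // Validation testing: show the system operates as intended.
        return new Money(12, "usd").add(new Money(14, "usd")).amount() == 26;
    }

    public static boolean defectTest() {
        // Defect testing: make the system misbehave, expect graceful failure.
        try {
            new Money(12, "usd").add(new Money(14, "eur"));
            return false; // no error raised: the defect went undetected
        } catch (IllegalArgumentException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println("validation: " + validationTest());
        System.out.println("defect:     " + defectTest());
    }
}
```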
Software testing process
Design test cases -> Prepare test data -> Run program with test data -> Compare results to test cases
(producing, in turn: test cases -> test data -> test results -> test reports; the later stages can be automated)
• A test case specifies:
- what to test
- input data
- expected output
[Figure: among the input test data is Ie, the inputs causing anomalous behaviour; among the output test results is Oe, the outputs which reveal the presence of defects. The figure distinguishes validation testing (white) from defect testing (blue), which targets Ie and Oe.]

Three stages of software testing
• 8.1 Development testing:
- The system is tested during development to discover bugs and defects.
• 8.3 Release testing:
- A separate testing team tests a complete version of the system before it is released to users.
• 8.4 User testing:
- Users or potential users of a system test the system in their own environment.

8.1 Development testing
• All testing activities carried out by the team developing the system (defect testing).
- Unit testing: individual program units or object classes are tested.
✦ Should focus on testing the functionality of objects.
- Component testing: several individual units are integrated to create composite components.
✦ Should focus on testing component interfaces.
- System testing: some or all of the components in a system are integrated and the system is tested as a whole.
✦ Should focus on testing component interactions.

8.1.1 Unit testing
• Unit testing is the process of testing individual components (functions or classes) in isolation.
• Goal: complete test coverage of a class:
- Testing all operations associated with an object
- Setting and interrogating all object attributes
- Exercising the object in all possible states

Writing test cases for WeatherStation
[Class diagram: WeatherStation has an identifier attribute and the operations reportWeather(), reportStatus(), powerSave(instruments), remoteControl(commands), reconfigure(commands), restart(instruments), and shutdown(instruments).]
1. Write a test case to check that identifier is set properly during normal system startup.
2. Write test cases for each of the methods (reportWeather(), etc.).
• Try to test in isolation, but this is not always possible: for example, you cannot test shutdown unless you have already executed restart.

Testing the states of an object
• Use the state model:
- Identify sequences of state transitions to be tested.
- Write a test case that generates the event sequences to cause these transitions.
- Verify that the program ends up in the proper state.
• For example (for WeatherStation):
- Shutdown -> Running -> Shutdown
- Configuring -> Running -> Testing -> Transmitting -> Running
- Running -> Collecting -> Running -> Summarizing -> Transmitting -> Running
• Shutdown -> Running -> Shutdown is tested with a call to restart() followed by a call to shutdown(), then checking the state.
[State diagram: events such as shutdown(), remoteControl(), reportStatus(), restart(), reconfigure(), powerSave(), and reportWeather() drive the WeatherStation between the Shutdown, Running, Controlled, Testing, Transmitting, Configuring, Collecting, and Summarizing states; further transitions fire on "transmission done", "test complete", "configuration done", "clock", "collection done", and "weather summary complete".]

Automated unit testing (from ch. 3)
• Unit testing should be automated so that tests are run and checked without manual intervention.
• JUnit: a unit testing framework for the Java programming language.
- Your test classes extend the JUnit test class.
- Initialize the test case with the inputs and expected outputs.
- Call the method to be tested.
- Make the assertion: compare the result of the call with the expected result.
- When executed, if the assertion evaluates to true, the test has succeeded; if false, it has failed.

Sample JUnit-style test case:

    public class MoneyTest extends TestCase {
        public void testSimpleAdd() {
            Money m1 = new Money(12, "usd");
            Money m2 = new Money(14, "usd");
            Money expected = new Money(26, "usd");
            Money result = m1.add(m2);
            assertEquals(expected, result);
        }
    }
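The Shutdown -> Running -> Shutdown sequence above can be sketched as a state-transition test. This is a minimal, assumed implementation: the textbook only names the states and events, so the enum, the state field, and the toy transition bodies below are my own.

```java
// Sketch only: a toy WeatherStation with just enough state to drive
// the Shutdown -> Running -> Shutdown sequence described above.
public class WeatherStationStateTest {
    enum State { SHUTDOWN, RUNNING }

    static class WeatherStation {
        private State state = State.SHUTDOWN; // assumed initial state
        void restart() { state = State.RUNNING; }
        void shutdown() { state = State.SHUTDOWN; }
        State state() { return state; }
    }

    // Generate the event sequence, then verify the final state.
    static boolean testShutdownRunningShutdown() {
        WeatherStation ws = new WeatherStation();
        ws.restart();
        if (ws.state() != State.RUNNING) return false;
        ws.shutdown();
        return ws.state() == State.SHUTDOWN;
    }

    public static void main(String[] args) {
        System.out.println(testShutdownRunningShutdown() ? "passed" : "FAILED");
    }
}
```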
8.1.2 Choosing unit test cases
Two types of unit test cases:
• Those that show the component behaves correctly under normal use.
- For example, demonstrate (part of) a use case.
• Those that expose defects/bugs.
- For example, invalid input should generate an error message and fail gracefully.

Two strategies for choosing unit test cases
• Partition testing: identify groups of inputs that have common characteristics and should be processed in the same way.
- Choose tests from within each of these groups.
• Guideline-based testing: use testing guidelines based on the kinds of errors that programmers often make.
- Choose tests based on these guidelines.

Partition testing
• Divide the input data of a software unit into partitions of data.
- The program should behave similarly for all data in a given partition.
• Determine the partitions from the specifications.
• Design test cases to cover each partition at least once.
• If the partition is a range, also test the boundary values.
• Enables good test coverage with fewer test cases.

Equivalence partitions example
The specification states that the program accepts 4 to 10 inputs, each a five-digit integer (greater than 10,000):
• Number of input values: partitions are "less than 4", "between 4 and 10", and "more than 10" — tested with 3 and 11 outside the range and 4, 7, and 10 within it.
• Input values: partitions are "less than 10000", "between 10000 and 99999", and "more than 99999" — tested with 9999 and 100000 outside the range and 10000, 50000, and 99999 within it.
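The partition example above can be turned into concrete tests. The validate() routine below is an assumed implementation of the stated specification (4 to 10 inputs, each between 10000 and 99999); the test inputs cover each partition at least once and include the boundary values.

```java
import java.util.List;

// Sketch only: validate() is an assumed implementation of the spec
// "the program accepts 4 to 10 inputs which are five-digit integers".
public class PartitionExample {
    static boolean validate(List<Integer> inputs) {
        if (inputs.size() < 4 || inputs.size() > 10) return false; // count partition
        for (int v : inputs) {
            if (v < 10000 || v > 99999) return false;              // value partition
        }
        return true;
    }

    public static void main(String[] args) {
        // Valid partition, including both boundary values 10000 and 99999:
        System.out.println(validate(List.of(10000, 50000, 99999, 12345)));  // true
        // Fewer than 4 inputs:
        System.out.println(validate(List.of(10000, 50000, 99999)));         // false
        // A value just below the 10000 boundary:
        System.out.println(validate(List.of(9999, 50000, 99999, 12345)));   // false
        // A value just above the 99999 boundary:
        System.out.println(validate(List.of(10000, 50000, 100000, 12345))); // false
    }
}
```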
Guideline-based testing
• Choose test cases based on previous experience of common programming errors.
• For example:
- Choose inputs that force the system to generate all error messages.
- Repeat the same input or series of inputs numerous times.
- Try to force invalid outputs to be generated.
- Force computation results to be too large or too small.
- Test sequences/lists using:
✦ one element
✦ zero elements
✦ different sizes in different tests

8.1.3 Component testing
• Software components are made up of several interacting objects (or sub-components).
• The functionality of these objects is accessed through the defined component interface.
• Component testing is demonstrating that the component interface behaves according to its specification.
- This assumes the sub-components (objects) have already been unit-tested.

Interface (component) testing
[Figure: test cases are applied to a component built from sub-components A, B, and C; the small empty boxes represent the interface.]

8.1.4 System testing
• System testing: integrating components to create a version of the system and then testing the integrated system.
• Checks that:
- components are compatible,
- components interact correctly,
- components transfer the right data at the right time across their interfaces.
• Tests the interactions between components.

Use case-based testing
• Use cases developed during requirements engineering to identify system interactions are a good basis for system testing.
- Typically, each use case is implemented by several components in the system.
• If you developed sequence diagrams for the use cases, you can see the components and interactions that are being tested.
- Use the diagram to identify the operations that will be tested, and to help design test cases to execute the tests.
[Sequence diagram: for reportWeather, the weather information system's SatComms object sends request(report) to WeatherStation, which calls reportWeather(), obtains a summary from WeatherData via get(summary) and summarise(), and returns the report through Commslink via send(report) and reply(report), with acknowledgements at each step.]

System testing policies
• Exhaustive system testing is impossible.
• Testing policies define the required subset of all possible tests.

8.3 Release testing
• Release testing is testing a particular release of a system that is intended for use outside of the development team.
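Returning to the guideline-based advice in 8.1.2 above on testing sequences, the sketch below exercises zero elements, one element, and different sizes. The sum() routine is a stand-in unit under test of my own invention; the point is the choice of inputs, not the routine.

```java
import java.util.List;

// Sketch only: sum() is a stand-in unit under test, chosen so the
// guideline-based inputs (zero/one/many elements) are easy to see.
public class SequenceGuidelineTests {
    static int sum(List<Integer> xs) {
        int total = 0;
        for (int x : xs) total += x;
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sum(List.of()));           // zero elements -> 0
        System.out.println(sum(List.of(7)));          // one element -> 7
        System.out.println(sum(List.of(1, 2, 3)));    // several elements -> 6
        System.out.println(sum(List.of(1, 2, 3, 4))); // a different size -> 10
    }
}
```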