
Topic # 10

Software Testing: Strategies

(Ch. 17)


Objectives

1. Testing Strategy

2. Verification and Validation

3. Unit and Integration Testing

4. Validation Testing

5. System Testing

Strategy

• The goal of software testing is to uncover errors.

• To achieve this goal, the following strategic approach should be used: a series of four types (levels) of software tests -- 1) unit tests, 2) integration tests, 3) validation tests, and 4) system tests -- should be planned and executed.

• Software testing accounts for the largest percentage (up to 25-30%) of technical effort in the software process.

• Testing is a systematic, planned activity -- and also an art (luck, intuition, etc.).

Verification and Validation

Verification -- a process-related term corresponding to "Are we building the product right?" (Is the process correct?)

Validation -- a product-related term corresponding to "Are we building the right product?" (Is the software product correct: does it meet all customer requirements?)

Ex: somebody used software engineering theory, software procedures, a graphical user interface, and testing procedures, BUT the final software product is useless for the customer (or it is not popular among the vast majority of customers/users).

In this case: verification -- YES; validation -- NO.

Components of Software Testing Strategy

1. Unit tests: concentrate on functional verification of a component.

2. Integration tests: concentrate on the incorporation of components into a program structure.

3. Validation tests: demonstrate traceability to software requirements.

4. System tests: validate software once it has been incorporated into a larger system (actual environment).


Question: Why do we need so many types of tests?

Answers: Due to the various types of SW bugs. Due to the various consequences of SW bugs.

Bug Categories: 1) variables-related bugs, 2) input/output data-related bugs, 3) coding (syntax)-related bugs, 4) system-related bugs, 5) functionality (functions)-related bugs, 6) design-related bugs, 7) standards violation-related bugs, 8) documentation-related bugs, etc.

Consequences (Types) of SW Bugs

Bug types, by increasing damage: mild, annoying, disturbing, serious, extreme, catastrophic, infectious.

In 2002, a study commissioned by the US Department of Commerce's National Institute of Standards and Technology concluded that software bugs, or errors, are so prevalent and so harmful that they cost the US economy an estimated $59 billion annually, or about 0.6 percent of the gross domestic product.

What Testing Shows (Outcomes)

syntax (coding) errors, logic errors, input data errors, etc.

requirements conformance

performance

quality of final product


1. Unit Testing

• Testing begins at the unit (module) level and works "outward" toward the integration of the entire SW system.

• Testing is conducted by the developer in the developer's environment (alpha-testing) and by an independent testing group outside the development environment (beta-testing).

[Figure: software test cases are applied to the module to be tested, producing results]

Unit testing focuses verification effort on the smallest unit. Parallel testing of multiple modules is possible.

Unit Testing (cont.)

[Figure: test cases applied to the module to be tested]

We should test:

• scope of all declared variables (local, global)
• data types of variables (integer, real, string, char, etc.)
• boundary conditions (what happens to variables when we leave this unit or module)
• all functional tests (to test all required functions inside this unit)
• independent paths inside this unit or module
• all interfaces inside the unit/module
• all error-handling paths (identify and test them)
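The checklist above can be sketched as a small unit test. The module under test, `safe_divide`, is a hypothetical example (not from the slides); the asserts cover a normal functional path, a boundary condition, and the error-handling path.

```python
def safe_divide(a, b):
    """Hypothetical unit under test: divide a by b, rejecting zero."""
    if b == 0:
        raise ValueError("division by zero")
    return a / b

# Functional test: a normal path through the unit.
assert safe_divide(10, 2) == 5

# Boundary condition: numerator at the edge value 0.
assert safe_divide(0, 7) == 0

# Error-handling path: the unit must reject a zero divisor.
try:
    safe_divide(1, 0)
    raised = False
except ValueError:
    raised = True
assert raised
```

In a real project these checks would live in a test framework (e.g., `unittest` or `pytest`) so that each unit's tests can run in parallel with other modules' tests.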


2. Integration Testing Strategies

The goal of integration testing is to ensure that all interacting software units/modules/subsystems in a system interface correctly with one another to produce the desired results.

Furthermore, in trying to attain this goal, integration tests ensure that the introduction of one or more subsystems into the system does not have an adverse effect on existing functionality.

An integration test covers the testing of interface points between subsystems. Integration testing is performed once unit testing has been completed for all units contained in the subsystems being tested.

Two main incremental approaches: • top-down integration • bottom-up integration. (Both contrast with the non-incremental "big bang" approach, in which all components are combined at once and the entire program is tested as a whole.)

Top-Down Integration

[Figure: module hierarchy with top module A over subordinate modules B, F, G and lower-level modules C, D, E]

• The top module is tested with stubs.
• Stubs are replaced one at a time, "depth first".
• As new modules are integrated, some subset of tests is re-run.
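The two phases above can be sketched in a few lines. The modules here (`module_a`, `module_b_stub`, `module_b_real`) are hypothetical stand-ins: the top-level module is first exercised with a stub for its subordinate, then the stub is swapped for the real module and the tests are re-run.

```python
def module_b_stub(x):
    # Stub: returns a fixed, predictable value instead of real logic.
    return 0

def module_b_real(x):
    # Real subordinate module, integrated in a later step.
    return x * 2

def module_a(x, subordinate):
    # Top-level module under test; its subordinate is injected,
    # so a stub and the real module are interchangeable.
    return subordinate(x) + 1

# Phase 1: test the top module with the stub in place.
assert module_a(5, module_b_stub) == 1

# Phase 2: replace the stub with the real module and re-run the test.
assert module_a(5, module_b_real) == 11
```

Injecting the subordinate as a parameter is one common way to make stub replacement mechanical; mocking frameworks automate the same idea.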


Bottom-Up Integration

[Figure: module hierarchy with A at the top over B, F, G; working modules C, D, E grouped into a cluster at the bottom]

• Working modules are grouped into building blocks (clusters) and integrated.
• Drivers coordinate test input and output; they are replaced one at a time, "depth first", as integration moves up the hierarchy.

3, 4, … and Other High-Order Testing

• Validation testing: focus is on software requirements.
• System testing: focus is on the software combined with other system elements, tested as a whole.
• Alpha/Beta testing: focus is on customer usage.
• Recovery testing: forces the software to fail in a variety of ways and verifies that recovery is properly performed.
• Security testing: verifies that protection mechanisms built into a system will, in fact, protect it from improper penetration.
• Stress testing: executes a system in a manner that demands resources in abnormal quantity, frequency, or volume.
• Performance testing: tests the run-time performance of software within the context of an integrated system.
• …and other specialized testing.

Software Testing: Tools by IBM Company (examples)

Topic # 10

Software Testing: Strategies

Ch. 17: Additional Information


Software Testing: Additional Information

1. Software Testing Standards and Procedures: http://it.toolbox.com/blogs/enterprise-solutions/sample-software-testing-standards-and-procedures-12772

2. Unit Testing: http://mauriziostorani.wordpress.com/2008/07/09/unit-testing-examples-concepts-and-frameworks/

3. Integration Testing http://www.exforsys.com/tutorials/testing/integration-testing-whywhathow.html

Topic # 10:

Software Testing: SW Testing Techniques

(Ch. 18)


Objectives

1. Software Testing Principles

2. Black-Box Testing technique

3. White-Box Testing technique

SW Testing Principles

1. All tests should be traceable to customer requirements. *) The most severe problems are those that cause the program to fail to meet customer requirements.

2. Tests should be planned long before the actual testing process begins. *) Think about testing as soon as customer requirements are completed.

3. The Pareto principle applies to SW testing. *) 80% of all errors uncovered during testing will likely be traceable to 20% of all program components (i.e., 20% of the components generate 80% of the errors).

4. Testing should begin "in the small" (modules) and progress toward testing "in the large" (subsystems and systems). *) Think about different testing procedures and content (tests) for each SW level -- system, subsystem, unit, component -- as soon as requirements are completed.

5. Exhaustive testing is NOT possible. *) The number of path permutations even in a small program is exceptionally large → if the required reliability is greater than 0.9…

6. To be most effective, testing should be conducted by both a developer (alpha-testing) and an independent tester (beta-testing).
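Principle 5's combinatorial blow-up is easy to illustrate: a module with n independent two-way decisions in sequence already has 2^n distinct execution paths, before loops are even considered. A one-line sketch:

```python
def path_count(n_branches: int) -> int:
    # Each independent two-way branch doubles the number of
    # distinct paths through the module: 2 ** n paths in total.
    return 2 ** n_branches

assert path_count(10) == 1024                  # a tiny module: >1000 paths
assert path_count(60) == 1152921504606846976   # ~10**18 paths -- untestable
```

Loops make the count far worse, since each admissible iteration count multiplies the paths again; hence exhaustive path testing is abandoned in favor of selective techniques.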


Who Tests the Software?

SW developer (alpha-testing):
• understands the system
• is driven by "delivery"
• will test "gently"

Independent tester (beta-testing):
• must learn about the system
• is driven by quality
• will attempt to test as much as possible, even to break the software system

Software Testing Techniques

[Diagram: the white-box technique and the black-box technique (Techniques) are applied within the testing Strategies]


White Box Testing Technique

The goal of white-box testing (inside- module) is to

1) exercise all program logic paths within a module at least once,

2) check all loop execution constraints on both TRUE and FALSE (YES/NO) sides, and

3) check all internal boundaries to ensure their validity.

It focuses on the program control structure within a single module.

"... our goal is to ensure that all statements and conditions have been executed at least once ..."

It is a very time- and effort-consuming testing method.

White-Box Testing Mechanics

[Diagram: unit/module environment -- input data flows into the module code, output data flows out]

Example: a currency (EURO to DOLLAR) converter unit/module (1-999 Euros -- 1.4; 1000-9999 -- 1.45; >10000 -- 1.45).
Input: amount (in EUROs).
Output: amount in U.S. DOLLARS (with a rate based on the amount of EUROs).
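A white-box test suite for this converter might look as follows. The implementation is a sketch of the slide's rate table (the behavior exactly at 10000 EUROs is not specified on the slide, so the cut-off chosen below is an assumption); the point is that the test values are picked so that every branch of the code executes at least once.

```python
def euros_to_dollars(amount: int) -> float:
    """Sketch of the slide's converter; the >=10000 cut-off is assumed."""
    if amount < 1:
        raise ValueError("amount must be at least 1 EURO")
    if amount <= 999:
        rate = 1.4       # branch 1: 1-999 EUROs
    elif amount <= 9999:
        rate = 1.45      # branch 2: 1000-9999 EUROs
    else:
        rate = 1.45      # branch 3: larger amounts (per the slide)
    return amount * rate

# White-box test cases: one (or two) per branch, hitting each boundary.
assert euros_to_dollars(1) == 1 * 1.4          # branch 1, lower boundary
assert euros_to_dollars(999) == 999 * 1.4      # branch 1, upper boundary
assert euros_to_dollars(1000) == 1000 * 1.45   # branch 2, lower boundary
assert euros_to_dollars(20000) == 20000 * 1.45 # branch 3
```

With these four inputs (plus an invalid input such as 0 for the error path), every statement and every condition outcome in the unit has been exercised at least once, which is exactly the white-box goal stated above.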

Example

Exhaustive Testing vs Selective Testing

Selective Testing: an application of the Pareto principle: 80% of all errors uncovered during testing will likely be traceable to 20% of all components (i.e., 20% of the components generate 80% of the errors).

Knowledge? Experience! Intuition!

Topic # 10

Software Testing: SW Testing Techniques

In-Classroom Exercise # 1

Black-Box Testing Technique

Black-box testing focuses on the functional requirements of the software without regard to the internal workings of a module or program.

It is not an alternative to white-box testing; it is a complementary approach.

White-box and black-box techniques uncover different classes (types) of errors.

[Diagram: inputs and events flow into the module; outputs flow out]
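A black-box sketch: the tester derives cases purely from the specified inputs and outputs, never from the code. The module here, a hypothetical `classify_triangle`, is shown only so the example runs; a black-box tester would see just its interface, and any implementation meeting the spec must pass the same tests.

```python
def classify_triangle(a, b, c):
    # Internals hidden from the black-box tester.
    if a + b <= c or a + c <= b or b + c <= a:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Test cases derived from the requirements alone:
# one per specified output class, without reading the code.
assert classify_triangle(3, 3, 3) == "equilateral"
assert classify_triangle(3, 3, 5) == "isosceles"
assert classify_triangle(3, 4, 5) == "scalene"
assert classify_triangle(1, 2, 3) == "not a triangle"
```

Note what these tests cannot catch: an unreachable branch or a redundant condition inside the code -- those are exactly the error classes white-box testing targets, which is why the two techniques are complementary.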


Black-Box Testing

[Diagram: environment -- input data (of specified data types and structures) and events enter the unit/module (with code); output data (of specified data types and structures) comes out]

Example: Webster system.
Input: alpha-numeric UserName and Password.
Output: a) BUID number (integer data type); b) CORRECT (Yes, Enter) / INCORRECT (Boolean data type).

Topic # 10

Software Testing: SW Testing Techniques

In-Classroom Exercise # 2


Software Testing Techniques: Examples in Industry

http://research.microsoft.com/en-us/projects/pex/ http://www.ibm.com/developerworks/rational/library/1147.html

Topic # 10

Software Testing: SW Testing Techniques

In-Classroom Exercise # 3


Topic # 10

Software Testing: SW Testing Techniques

Homework assignment

Topic # 10

Software Testing: SW Testing Techniques

Additional Information


Characteristics of SW Testability

1. Operability: it operates correctly (no bugs).

2. Observability: the results of each test are readily observed ("what you see is what you test").

3. Controllability: the degree to which testing can be automated and optimized.

4. Decomposability: testing can be targeted (inside independent modules).

5. Simplicity: reduce complex architecture and logic to simplify tests.

6. Stability: few changes are requested during testing.

7. Understandability of the design (the more information we have, the smarter we will test).

*) There are still Ph.D. dissertations on how to automatically generate tests for SW systems.

Other Techniques

1. error-guessing methods
2. decision-table techniques
3. cause-effect graphing


Boundary Value Analysis (BVA)

[Diagram: input domain (user queries, mouse picks, prompts, data) and output domain (output formats)]

A great number of errors tends to occur at the boundaries of the input domain rather than in the "center".
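BVA therefore picks test values at and just beside each boundary rather than from the middle of the domain. A minimal sketch for an input specified as an integer range (the range 1..100 below is a made-up example, not from the slides):

```python
def bva_values(lo: int, hi: int) -> list:
    # Classic boundary value analysis for an integer range [lo, hi]:
    # each boundary, plus the values immediately inside and outside it.
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

# For an input field specified as 1..100:
assert bva_values(1, 100) == [0, 1, 2, 99, 100, 101]
```

The two out-of-range values (0 and 101) double as invalid-input tests, and the same idea applies to output boundaries: choose inputs that drive the output to its minimum and maximum specified values.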

Equivalence Partitioning

[Diagram: input domain -- prompts, mouse picks, user queries, output formats, data -- divided into equivalence classes]

This black-box technique divides the input domain of a program into classes of data from which test cases can be derived.

Sample Equivalence Classes

Valid data:
• user-supplied commands
• responses to system prompts
• file names
• computational data
• physical parameters
• bounding values
• initiation values
• output data formatting
• responses to error messages
• graphical data (e.g., mouse picks)

Invalid data:
• data outside the bounds of the program
• physically impossible data
• proper value supplied in the wrong place
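The idea can be sketched with a hypothetical "age" input field (valid range 0-120, an assumed spec): the input domain splits into one valid class and several invalid classes, and a single representative value per class stands in for the whole class.

```python
def age_class(age) -> str:
    # Partition of the input domain for a hypothetical age field (0-120):
    # one valid equivalence class and three invalid ones.
    if not isinstance(age, int):
        return "invalid: wrong type"
    if age < 0:
        return "invalid: below range"
    if age > 120:
        return "invalid: above range"
    return "valid"

# One representative test value per equivalence class:
assert age_class(35) == "valid"
assert age_class(-5) == "invalid: below range"
assert age_class(200) == "invalid: above range"
assert age_class("x") == "invalid: wrong type"
```

Four test cases thus cover the whole domain at the class level; combining this with boundary value analysis (testing 0, 120, -1, 121) strengthens the suite at the class edges.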

Software Testing

Testing is the process of exercising a program with the specific intent of finding errors prior to delivery to the end user.

Testing is the one step in the software process that could be viewed (psychologically, at least) as destructive rather than constructive.
