W12 Concurrent Session Wednesday 05/07/2008 3:00 PM

Systematic Test Design … All on One Page

Presented by:

Peter Zimmerer Siemens AG

Presented at: STAREAST (Software Testing Analysis & Review), May 5-9, 2008; Orlando, FL, USA

330 Corporate Way, Suite 300, Orange Park, FL 32043 888-268-8770  904-278-0524  [email protected]  www.sqe.com

Peter Zimmerer

Peter Zimmerer is a Principal Engineer at Siemens AG, Corporate Technology, in Munich, Germany. He studied Computer Science at the University of Stuttgart, Germany, and received his M.Sc. degree (Diplominformatiker) in 1991. He is an ISTQB™ Certified Tester, Full Advanced Level.

For more than 15 years he has been working in the field of software testing and quality engineering for object-oriented (C++, Java), distributed, component-based, and embedded software. He was also involved in the design and development of different Siemens in-house testing tools for component and integration testing.

At Siemens he performs consulting on testing strategies, testing methods, testing processes, test automation, and testing tools in real-world projects and is responsible for the research activities in this area. He is co-author of several journal and conference contributions and speaker at international conferences, e.g. at Conference on Quality Engineering in Software Technology (CONQUEST), SIGS-DATACOM OOP, Conference on Software Engineering (SE), GI-TAV, STEV Austria, SQS Software & Systems Quality Conferences (ICS Test), SQS Conference on Software QA and Testing on Embedded Systems (QA&Test), Dr. Dobb’s Software Development Best Practices, Dr. Dobb’s Software Development West, Conference on Testing Computer Software, Quality Week, Conference of the Association for Software Testing (CAST), Pacific Northwest Software Quality Conference (PNSQC), PSQT/PSTT, QAI’s Software Testing Conference, EuroSTAR, and STARWEST.

He can be contacted at [email protected]. Internet: http://www.siemens.com/research-and-development/ http://www.siemens.com/corporate-technology/


Systematic Test Design … All on One Page
STAREAST 2008, Orlando, FL, USA

Peter Zimmerer
Principal Engineer
Siemens AG, CT SE 1
Corporate Technology, Corporate Research and Technologies
Software & Engineering, Development Techniques
D-81739 Munich, Germany
[email protected]
http://www.siemens.com/research-and-development/
http://www.siemens.com/corporate-technology/
Copyright © Siemens AG 2008. All rights reserved.

Contents

- Introduction
- Test design methods
  (here: methods, paradigms, techniques, styles, and ideas to create, derive, select, or generate a test case)
- Examples and references
- Problem statement
- Poster Test Design Methods on One Page
- Guidelines and experiences
- Summary

What is a Test Case?

A set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement. (ISTQB 2007, IEEE 610)

A Test Case should include:
- unique identification – who am I?
- test goal, test purpose – why?
- test conditions – what?
- preconditions – system state, environmental conditions
- test data – inputs, data, actions
- execution conditions – constraints, dependencies
- expected results – oracles, arbiters, verdicts, traces
- postconditions – system state, traces, environmental conditions, expected side effects, expected invariants
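As an illustration only (not part of the original slide), this checklist could be captured as a simple record type so that every authored test case carries the same fields; the field names below are hypothetical:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class TestCase:
    test_id: str                                   # unique identification - who am I?
    purpose: str                                   # test goal, test purpose - why?
    conditions: List[str]                          # test conditions - what?
    preconditions: List[str]                       # required system state, environmental conditions
    inputs: Dict[str, object]                      # test data: inputs, data, actions
    execution_conditions: List[str] = field(default_factory=list)      # constraints, dependencies
    expected_results: Dict[str, object] = field(default_factory=dict)  # oracles, verdicts
    postconditions: List[str] = field(default_factory=list)            # expected side effects, invariants
```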

Introduction – Test design methods

Good test design, i.e. high-quality test cases, is very important. There are many different test design methods and techniques:
- Static, dynamic
- Black-box, white-box, grey-box
- Based on fault models, experience, exploratory
- Statistical (user profiles), random (monkey)

The tester's challenge is to adequately combine these methods depending on the given problem, domain, and requirements.
- This is an art as well!

Black-box test design methods are often based on models – model-based testing.

Some systematic methods for test design

Black-box (models, interfaces, data)
- Requirements-based (traceability matrix)
- Use case-based testing, scenario testing
- Design by contract
- Equivalence class partitioning
- Classification-tree method
- Boundary value analysis
- State-based testing
- Cause-effect graphing
- Decision tables, decision trees
- Combinatorial testing (n-wise)

White-box (internal structure, paths)
- Control flow testing
- Data flow testing

Selection, usage, and applicability depend on the
- specific domain (domain knowledge is required!)
- used software technology
- test requirements: required test intensity, quality criteria, risks
- existing test basis: specifications, documents, models
- project factors: constraints and opportunities

Example – There are always too many test cases ...

Examples – Demo

- Microsoft PowerPoint
- Microsoft Word 2002

Test effectiveness and formal (systematic) test design

There are studies showing advantages of systematic test design. There are also studies showing advantages of random testing.
→ But do you really want to design your test cases only randomly?

Formal test design was almost twice as effective in defect detection per test case as compared to expert (exploratory) type testing, and much more effective compared to checklist type testing. (Bob Bartlett, SQS UK, 2006)

Some references

Many testing books cover test design to some extent

- Boris Beizer: Software Testing Techniques
- Lee Copeland: A Practitioner's Guide to Software Test Design
- Rick D. Craig, Stefan P. Jaskiel: Systematic Software Testing
- Tim Koomen et al.: TMap Next: For Result-driven Testing
- Glenford J. Myers: The Art of Software Testing
- Torbjörn Ryber: Essential Software Test Design
- James Whittaker: How to Break Software
- James Whittaker, Herbert Thompson: How to Break Software Security
- Standard for Software Component Testing by the British Computer Society Specialist Interest Group in Software Testing (BCS SIGIST) (see http://www.testingstandards.co.uk/)

There are many different training offerings by different providers.

Problem statement

Starting from a risk-based testing strategy, an adequate test design is the key to effective and efficient testing. Automation of bad test cases is a waste of time and money!

Many different test design methods have been around for a long time (perhaps too many?), and a lot of books explain them in detail. There are different
- categorizations, classifications, and dimensions
- names, interpretations, and understandings
of test design methods, which does not simplify their usage …

When we look into practice, we often see quite limited usage of these test design methods.

What are the reasons behind that? How can we overcome this and improve our testing approaches?

Possible reasons and consequences (1)

Possible reasons
- Engineers do not have / spend time to read a book on testing
- Missing the big picture
- Information is not available at a glance
- Which test design methods are there at all?
- Which test design method should I use in which context?

Consequences
- A specific test design method is not “available” when needed
- A specific test design method is too detailed or too complicated to be used in practice
- Often the focus is on depth and on perfectionism
  → Remark: But that can be especially required, e.g. in a safety-critical environment!

Possible reasons and consequences (2)

Test design tools are typically focused and implement only a few test design methods, e.g.:
- ATD – Automated Test Designer (AtYourSide Consulting, http://www.atyoursideconsulting.com/): Cause-effect graphing
- BenderRBT (Richard Bender, http://www.benderrbt.com/): Cause-effect graphing, quick design (orthogonal pairs)
- CaseMaker (Diaz & Hilterscheid, http://www.casemaker.de/): Business rules, equivalence classes, boundaries, error guessing, pairwise combinations, and element dependencies
- CTE (Razorcat, http://www.ats-software.de/): Classification tree editor
- Reactis (Reactive Systems, http://www.reactive-systems.com/): Generation of test data from, and validation of, Simulink and Stateflow models
- TestBench (Imbus, http://www.testbench.info/, http://www.imbus.de/): Equivalence classes, workflow / use-case-based testing

Poster Test Design Methods on One Page (1)

Idea: Systematic, structured, and categorized overview about different test design methods on one page

Focus more on using an adequate set of test design methods than on using only one single test design method in depth / perfection

Focus more on concrete usage of test design methods than on defining a few perfect test design methods in detail which are then not used in the project

Focus more on breadth instead of depth
- Do not miss breadth because of too much depth

Do not miss the exploratory, investigative art of testing

Test Design Methods on One Page

Poster Test Design Methods on One Page (2)

Black-box (models, interfaces, data)
- Standards (e.g. ISO/IEC 9126, IEC 61508), norms, (formal) specifications, claims – 3
- Requirements-based with traceability matrix (requirements x test cases) – 3
- Use case-based testing (sequence diagrams, activity diagrams) – 3
- CRUD (Create, Read, Update, Delete) (data cycles, database operations) – 3
- Flow testing, scenario testing, soap opera testing – 4
- User / Operational profiles: frequency and priority / criticality (Software Reliability Engineering) – 4
- Statistical testing (Markov chains) – 4
- Random (monkey testing) – 4
- Features, functions, interfaces – 1
- Design by contract (built-in self test) – 3
- Equivalence class partitioning – 2
- Domain partitioning, category-partition method – 4
- Classification-tree method – 3
- Boundary value analysis – 2
- Special values – 1
- Test catalog / matrix for input values, input fields – 5
- State-based testing (finite state machines) – 3
- Cause-effect graphing – 5
- Decision tables, decision trees – 5
- Syntax testing (grammar-based testing) – 4
- Combinatorial testing (pair-wise, orthogonal / covering arrays, n-wise) – 3
- Time cycles (frequency, recurring events, test dates) – 4
- Evolutionary testing – 5

Grey-box
- Dependencies / Relations between classes, objects, methods, functions – 2
- Dependencies / Relations between components, services, applications, systems – 3
- Communication behavior (dependency analysis) – 3
- Trace-based testing (passive testing) – 3
- Protocol-based (sequence diagrams, message sequence charts) – 4

White-box (internal structure, paths)
- Control flow-based coverage (specification-based, model-based, code-based)
  - Statements (C0), nodes – 2
  - Branches (C1), transitions, links, paths – 3
  - Conditions, decisions (C2, C3) – 4
  - Elementary comparison (MC/DC) – 5
  - Interfaces (S1, S2) – 4
- Static metrics
  - Cyclomatic complexity (McCabe) – 4
  - Metrics (e.g. Halstead) – 4
- Data flow-based
  - Read / Write access – 3
  - Def / Use criteria – 5

Positive, valid cases
- Normal, expected behavior – 1

Negative, invalid cases
- Invalid, unexpected behavior – 3
- Error handling – 3
- Exceptions – 5

Fault-based
- Risk-based – 2
- Systematic failure analysis (Failure Mode and Effect Analysis, Fault Tree Analysis) – 4
- Attack patterns (e.g. by James A. Whittaker) – 3
- Error catalogs, bug taxonomies (e.g. by Boris Beizer, Cem Kaner) – 4
- Bug patterns: standard, well-known bug patterns or produced by a root cause analysis – 3
- Bug reports – 2
- Fault model dependent on used technology and nature of system under test – 2
- Test patterns (e.g. by Robert Binder), Questioning patterns (Q-patterns by Vipul Kocher) – 3
- Ad hoc, intuitive, based on experience, check lists – 1
- Error guessing – 2
- Exploratory testing, heuristics, mnemonics (e.g. by James Bach, Michael Bolton) – 2
- Fault injection – 4
- Mutation testing – 5

Regression (selective retesting)
- Retest all – 5
- Retest by risk, priority, severity, criticality – 2
- Retest by profile, frequency of usage, parts which are often used – 3
- Retest changed parts – 2
- Retest parts that are influenced by the changes (impact analysis, dependency analysis) – 5

Key: the first column gives the categorization, the second the methods, paradigms, techniques, styles, and ideas to create a test case, and the trailing number the effort / difficulty / resulting test intensity (5 levels).

Poster Test Design Methods on One Page (3)

Categories of test design methods are orthogonal and independent in some way but should be combined appropriately.

The selection of the used test design methods depends on many factors, for example:
- Requirements of the system under test and the required quality
- Requirements for the tests – quality of the tests, i.e. the required intensity and depth of the tests
- Testing strategy: effort / quality of the tests, distribution of the testing in the development process
- Existing test basis: specifications, documents, models
- Problem to be tested (domain) or rather the underlying question (use case)
- System under test or component under test
- Test level / test step
- Used technologies (software, hardware)
- Suitable tool support: for some methods absolutely required

Poster Test Design Methods on One Page (4)

The effort / difficulty for the test design methods, or rather the resulting test intensity, is subdivided into 5 levels:
- 1 – very low, simple
- 2 – low
- 3 – medium
- 4 – high
- 5 – very high, complex

This division into levels depends on the factors given above for the selection of the test design methods and can therefore only be used as a first hint and guideline. A test design method may also be applied anywhere on the continuum from “intuitive use” up to “100% complete use”, as required.

In addition, describe every test design method on one page to explain its basic message and intention.

Example: Requirements-based with traceability matrix

Inventory tracking matrix: the rows list the inventories – Requirements 1–3, Features 1–3, Use Cases 1–3, Objectives 1–3, and Risks 1–3 – and the columns list Test Cases 1–8; an “x” marks which test cases cover which inventory item.
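Such a matrix can also be kept in machine-readable form and checked automatically, e.g. to flag inventory items not yet covered by any test case. A minimal sketch with hypothetical entries (the actual “x” marks are in the slide's figure):

```python
# Inventory item -> set of covering test cases (hypothetical example data)
matrix = {
    "Requirement 1": {"Test Case 2"},
    "Requirement 2": {"Test Case 1", "Test Case 4", "Test Case 7"},
    "Feature 1":     {"Test Case 3", "Test Case 5"},
    "Risk 1":        set(),   # not yet covered by any test case
}

uncovered = [item for item, cases in matrix.items() if not cases]
print("Inventory items without any covering test case:", uncovered)
```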

Example: Use case-based testing – Scenario testing

A scenario is a hypothetical story used to help a person think through a complex problem or system.

Based e.g. on transaction flows, use cases, or sequence diagrams

A specific, i.e. more extreme kind of scenario testing is the so-called soap opera testing

Example: Soap opera testing

Ref.: Hans Buwalda: Soap Opera Testing, Better Software, February 2004

Example: Soap opera testing – Test objectives

Corresponds to the traceability matrix

Ref.: Hans Buwalda: Soap Opera Testing, Better Software, February 2004

Example: Equivalence class partitioning and boundary values

Goal: Create the minimum number of black-box tests needed while still providing adequate coverage.

Two tests belong to the same equivalence class if you expect the same result (pass / fail) of each. Testing multiple members of the same equivalence class is, by definition, redundant testing.

Boundaries mark the point or zone of transition from one equivalence class to another. The program is more likely to fail at a boundary, so these are the best members of (simple, numeric) equivalence classes to use.

More generally, you look to subdivide a space of possible tests into relatively few classes and to run a few cases of each. You’d like to pick the most powerful tests from each class.
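A minimal sketch of the idea for a single numeric input, assuming a hypothetical valid range of 1 to 100 (the range is not taken from the slides):

```python
# Hypothetical specification: the input is valid from 1 to 100 (inclusive)
VALID_MIN, VALID_MAX = 1, 100

def classify(value: int) -> str:
    """Map a concrete input value to its equivalence class."""
    if value < VALID_MIN:
        return "invalid: below range"
    if value > VALID_MAX:
        return "invalid: above range"
    return "valid"

# One representative per class is enough for pure partitioning;
# further members of the same class are redundant by definition.
for value in (-5, 50, 150):
    print(value, "->", classify(value))
```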

Example: Equivalence class partitioning and boundary value analysis with 2 parameters

[Figure: three sketches of the two-dimensional input domain spanned by parameter a (amin … amax) and parameter b (bmin … bmax), marking the test points at and around the boundaries.]

# test cases for n parameters and one valid equivalence class: 4n + 1
# test cases for n parameters and one valid and one invalid equivalence class: 6n + 1
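A sketch of how these counts arise, assuming hypothetical ranges for the two parameters a and b: each parameter in turn is driven to its boundary values while the others stay at a nominal value, plus one all-nominal test case.

```python
# Hypothetical valid ranges and nominal values for the two parameters a and b
ranges  = {"a": (0, 10), "b": (20, 50)}   # (min, max) per parameter
nominal = {"a": 5, "b": 35}               # a typical valid value per parameter

def boundary_values(lo, hi, robust=False):
    """min, min+1, max-1, max; robust testing adds the invalid neighbours min-1 and max+1."""
    values = [lo, lo + 1, hi - 1, hi]
    return values + [lo - 1, hi + 1] if robust else values

def bva_test_cases(robust=False):
    cases = [dict(nominal)]               # the single all-nominal case (the "+ 1")
    for name, (lo, hi) in ranges.items():
        for value in boundary_values(lo, hi, robust):
            case = dict(nominal)
            case[name] = value            # vary one parameter, keep the others nominal
            cases.append(case)
    return cases

print(len(bva_test_cases()))              # 4*2 + 1 = 9 test cases
print(len(bva_test_cases(robust=True)))   # 6*2 + 1 = 13 test cases
```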

Example: Classification-tree method

Ref.: CTE XL, http://www.systematic-testing.com/

Example: State-based testing

[Figure: state transition diagram with States 1–3 and transitions labeled Condition / Action (Cond_1 / A, Cond_2 / B, Cond_3 / C, Cond_4 / D).]

State table – each entry is Snew / X, where X is the action (or event), Snew is the resulting new state, and action N means “do nothing”:

State     Cond_1   Cond_2   Cond_3   Cond_4
State 1   2 / A    1 / N    1 / N    1 / N
State 2   2 / N    3 / B    2 / N    2 / N
State 3   3 / N    3 / N    1 / C    3 / D
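The state table can be turned directly into an executable test model. A minimal sketch that derives one test per table cell (i.e. transition coverage); the step function stands in for the real system under test:

```python
# Transition table from the slide: (state, condition) -> (new state, action); "N" = do nothing
TABLE = {
    (1, "Cond_1"): (2, "A"), (1, "Cond_2"): (1, "N"), (1, "Cond_3"): (1, "N"), (1, "Cond_4"): (1, "N"),
    (2, "Cond_1"): (2, "N"), (2, "Cond_2"): (3, "B"), (2, "Cond_3"): (2, "N"), (2, "Cond_4"): (2, "N"),
    (3, "Cond_1"): (3, "N"), (3, "Cond_2"): (3, "N"), (3, "Cond_3"): (1, "C"), (3, "Cond_4"): (3, "D"),
}

def step(state, condition):
    """Hypothetical system under test; here it simply implements the table."""
    return TABLE[(state, condition)]

# One test case per table cell: start in the given state, apply the condition,
# and compare the resulting state and action against the expected entry.
for (state, condition), expected in TABLE.items():
    assert step(state, condition) == expected, (state, condition)
print("all", len(TABLE), "transitions covered")
```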

Example: Cause-effect graphing

Requires dependencies between parameters and can get very complicated and difficult to implement
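Cause-effect graphs are typically reduced to a decision table from which the test cases are then read off. A minimal sketch with a purely hypothetical rule (two causes enable the effect unless a third cause suppresses it); nothing here is taken from the slide's figure:

```python
from itertools import product

# Hypothetical causes and effect rule: E1 = (c1 AND c2) AND NOT c3
causes = ["c1_valid_account", "c2_sufficient_funds", "c3_card_blocked"]

def effect(c1, c2, c3):
    return c1 and c2 and not c3

# Decision table: every combination of cause values is a candidate test case;
# in practice the graph is used to prune combinations that cannot occur.
for row in product([True, False], repeat=len(causes)):
    print(dict(zip(causes, row)), "->", "E1" if effect(*row) else "no effect")
```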

Example: Combinatorial Testing (pairwise testing)

Given a system under test with 4 parameters A, B, C, and D
- Each parameter has 3 possible values
  - Parameter A: a1, a2, a3
  - Parameter B: b1, b2, b3
  - Parameter C: c1, c2, c3
  - Parameter D: d1, d2, d3
- A valid test input data set is e.g. {a2, b1, c2, d3}

Exhaustive testing would require 3^4 = 81 test cases

Only 9 test cases are sufficient to cover all pairwise interactions of the parameters
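The slide does not list the nine test cases; one possible pairwise-covering set is the L9 orthogonal array sketched below, and the check at the end confirms that every value pair of every parameter pair occurs at least once:

```python
from itertools import combinations, product

values = {"A": ["a1", "a2", "a3"], "B": ["b1", "b2", "b3"],
          "C": ["c1", "c2", "c3"], "D": ["d1", "d2", "d3"]}
params = list(values)

# 9 test cases built from an L9(3^4) orthogonal array (one possible pairwise set)
rows = [(0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
        (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
        (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0)]
tests = [{p: values[p][i] for p, i in zip(params, row)} for row in rows]

# Verify pairwise coverage: for every pair of parameters, all 3 x 3 value pairs occur
for p, q in combinations(params, 2):
    covered = {(t[p], t[q]) for t in tests}
    assert covered == set(product(values[p], values[q])), (p, q)
print(len(tests), "test cases cover all pairwise interactions")
```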

Example: Data flow-based defined / used paths
- defined (d)
  - for example, value assigned to a variable, initialized
- used (u)
  - for example, variable used in a calculation or predicate
  - predicate-use (p-u)
  - computation-use (c-u)

Test du-paths

Read / write access: “data source” and “data sink”

Use it on different levels of abstraction: model, unit, integration, system
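A small, purely illustrative example (not from the slide) of where definitions and uses sit in code, and which du-paths a data-flow-oriented test set should exercise:

```python
def price_with_discount(amount, is_member):
    discount = 0.0                      # def(discount)
    if is_member:                       # p-use(is_member)
        discount = 0.1                  # def(discount), kills the first definition
    total = amount * (1 - discount)     # c-use(amount), c-use(discount), def(total)
    return total                        # c-use(total)

# du-paths for "discount":
#   initial def (0.0)  -> c-use in "total = ..."  exercised with is_member False
#   redefinition (0.1) -> c-use in "total = ..."  exercised with is_member True
assert price_with_discount(100.0, False) == 100.0
assert price_with_discount(100.0, True) == 100.0 * 0.9
```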

Example: Heuristics and mnemonics*

- Boundary values
- CRUD (data cycles, database operations)
  - Create, Read, Update, Delete
- HICCUPPS (oracle)
  - History, Image, Comparable Products, Claims, User’s Expectations, Product, Purpose, Statutes
- SF DePOT (San Francisco Depot) (product element, coverage)
  - Structure, Function, Data, Platform, Operations, Time
- CRUSSPIC STMPL (quality criteria)
  - Capability, Reliability, Usability, Security, Scalability, Performance, Installability, Compatibility, Supportability, Testability, Maintainability, Portability, Localizability
- FDSFSCURA (testing techniques)
  - Function testing, Domain testing, Stress testing, Flow testing, Scenario testing, User testing, Risk testing, Claims testing, Automatic Testing
- FCC CUTS VIDS (application touring)
  - Feature tour, Complexity tour, Claims tour, Configuration tour, User tour, Testability tour, Scenario tour, Variability tour, Interoperability tour, Data tour, Structure tour

*Ref.: James Bach, Michael Bolton, Mike Kelly, and many more

Guidelines and experiences (1)

For beginners
- perhaps you are confused about the many test design methods
- start simple, step by step
- ask for help and advice from an experienced colleague, coach, or consultant

For advanced, experienced testers (and developers!)
- check your current approach against this poster, think twice, and improve incrementally

Use the poster as a checklist for existing test design methods

Selection of test design methods is dependent on the context! So, you should adapt the poster to your specific needs.


Guidelines and experiences (2)

Pick up this poster and
- give it to every developer and tester in your team, or
- put it on the wall in your office, or
- make it the standard screensaver or desktop background for all team members, or
- even use the "Testing on the Toilet" approach by Google (see http://googletesting.blogspot.com/) …

The poster increases the visibility and importance of test design methods, especially also for developers, and helps them improve

The poster facilitates a closer collaboration of testers and developers: you have something to talk about ...

Page 30 May 7, 2008 Peter Zimmerer, CT SE 1 © Siemens AG, Corporate Technology T e s t D e s i g n M e t h o d s o n O n e P a g e

Summary

There exist many different methods for adequate test design. Looking into practice, we often see these test design methods used only sporadically and in a non-systematic way.

The poster Test Design Methods on One Page, containing a systematic, structured, and categorized overview of test design methods, will help you to really get them used in practice in your projects.
- Do not miss breadth because of too much depth.

This will result in better and smarter testing
