An Ideal Architecture for Model-Based Verification & Test Systems


David Gelperin ([email protected]) Software Quality Engineering (stickyminds.com, sqe.com) Draft 1.8 July 12, 2001

This position statement records my biases and is meant to catalyze reflection, debate, discussion and research in order to develop some consensus on the characteristics of an ideal system. It is hoped that this consensus will stimulate and influence the development of both research and commercial systems. Suggestions for change are solicited and appreciated.

1. System Architecture

The following diagram shows an “ideal” architecture for a MbV&T system:

[Diagram: data flows through the components roughly as follows.]

1. Requirements Specifier --> essential models of software usage, behavior, & structure in Test-Essential UML
2. Model Checker --> checked software specs
3. Auto Test Designer --> functional & usage test specs in a "4th generation" test language
4. Auto Scripter --> scripts (complete & incomplete) for an execution engine
Additional test specs in the "4th generation" test language --> Manual Scripter --> scripts (complete & incomplete) for an execution engine
Non-functional, design-, & code-based test objectives --> Manual Test Designer --> manual scripts

Each of the numbered boxes is a piece of automation, possibly interactive. The primary results of each automated process become the primary input to its succeeding automated process.

This architecture supports "all at once" development of a test suite. Test suites can also be developed incrementally based on both a model and execution outcome information about the partial suite --- in particular, the outcome of the most recently produced test case [TA2]. Initial model development and checking would be the same for an incremental suite, but the development strategy would be different.
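The incremental strategy amounts to a feedback loop: each new test case is chosen using the outcome of the previous one. A minimal illustration of that loop (the names and the selection policy here are invented for this sketch; [TA2] describes the real approach):

```python
# Illustrative sketch: incremental suite development driven by the
# outcome of the most recently produced test case. All names invented.

def pick_next_case(uncovered, last_outcome, last_element):
    """Choose the next model element to test: on a failure, stay on
    the same element to probe further; otherwise move on."""
    if last_outcome == "fail" and last_element is not None:
        return last_element
    return next(iter(sorted(uncovered)), None)

def develop_suite(elements, run, budget=10):
    """Build a suite one case at a time, feeding each outcome back in."""
    uncovered = set(elements)
    suite, last_outcome, last_element = [], None, None
    while uncovered and len(suite) < budget:
        element = pick_next_case(uncovered, last_outcome, last_element)
        outcome = run(element)  # execute the generated case
        suite.append((element, outcome))
        if outcome == "pass":
            uncovered.discard(element)
        last_outcome, last_element = outcome, element
    return suite

# A toy system under test: element "b" fails once, then passes.
flaky = {"b": ["fail", "pass"]}
def run(element):
    return flaky[element].pop(0) if flaky.get(element) else "pass"

suite = develop_suite(["a", "b", "c"], run, budget=6)
```

The point of the sketch is only the feedback edge: the suite grows one case at a time, and the failing case steers what gets generated next.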

2. Assumptions

- In important situations, effective software development strategies entail the development of precise models of behavior, structure, and usage. This is true at both product and component levels.

- In these situations, a Model-based Verification and Test (MbV&T) System can be very cost-effective at supporting the development and debugging of these models as well as automatically generating executable function and usage tests.

- Widespread use of model-based test approaches requires a strong collection of commercial offerings, which in turn requires a market sufficient to ensure adequate profits for cost-effective systems.

- Achieving such a market depends on:
  (1) being applicable to a broad range of software
  (2) inclusion of mainstream modeling techniques
  (3) seamless integration with software design systems
  (4) user-friendly interfaces and ultra-understandable outputs
  (5) spectacular system operation and results, including reduced development costs and increased reliability

- Data flow between the components would be in XML Metadata Interchange (XMI) format [see http://www-4.ibm.com/software/ad/standards/xmi.html].
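The XMI assumption amounts to components exchanging models as XML documents. A toy round-trip in that spirit (the element and attribute names here are illustrative, not the actual XMI schema):

```python
# Sketch of XML-based data flow between components. The element and
# attribute names are invented, not the real XMI vocabulary.
import xml.etree.ElementTree as ET

def model_to_xml(classes):
    """Serialize a toy model (class name -> operation names) for interchange."""
    root = ET.Element("Model")
    for name, ops in classes.items():
        cls = ET.SubElement(root, "Class", name=name)
        for op in ops:
            ET.SubElement(cls, "Operation", name=op)
    return ET.tostring(root, encoding="unicode")

def xml_to_model(text):
    """Parse the interchange document back into the toy model."""
    root = ET.fromstring(text)
    return {c.get("name"): [o.get("name") for o in c.findall("Operation")]
            for c in root.findall("Class")}

model = {"TrafficLight": ["turnGreen", "turnRed"]}
roundtrip = xml_to_model(model_to_xml(model))
```

Whatever the concrete schema, the design point is the same: each component reads and writes a common, tool-neutral document rather than a proprietary in-memory format.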

3. Criteria for Ideal

The description of this system architecture provides a framework for specifying the characteristics of an ideal MbV&T system. Characteristics are ideal based on psychological (e.g., usability) and economic factors. Some of the economic factors are influenced by psychological ones, e.g., unusable systems are a poor investment.

Psychological factors include ease of learning and use, helpfulness of results, and clarity of individual payoff.

The major economic factor from the customer perspective is payoff, i.e., is it sufficient to support the customer's investment? From the supplier perspective, the major factor is profit, i.e., will pricing and demand allow sufficient profit?

4. General Characteristics

Some ideal characteristics apply to more than one component. Our intent, however, is not to create a list of characteristics that apply to components in general, but rather to list those that have special import for a cost-effective MbV&T system. These general characteristics include:

GC1. The association between each test and its precursors in the essential models must be maintained to provide traceability in both directions. This means that changes either to tests or essential models require updating of the associations.
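One way to maintain GC1's bidirectional associations is a trace map that updates both directions on every change. A hypothetical sketch (the class and method names are invented):

```python
# Illustrative sketch of GC1: bidirectional traceability between tests
# and their model-element precursors, kept consistent on change.
from collections import defaultdict

class TraceMap:
    def __init__(self):
        self.tests_of = defaultdict(set)     # model element -> tests
        self.elements_of = defaultdict(set)  # test -> model elements

    def link(self, test, element):
        """Record that a test was derived from a model element."""
        self.tests_of[element].add(test)
        self.elements_of[test].add(element)

    def remove_test(self, test):
        """Dropping a test also drops its links from every element."""
        for element in self.elements_of.pop(test, set()):
            self.tests_of[element].discard(test)

    def remove_element(self, element):
        """Changing a model element invalidates the tests traced to it;
        return those tests so they can be reviewed or regenerated."""
        stale = self.tests_of.pop(element, set())
        for test in stale:
            self.elements_of[test].discard(element)
        return stale

trace = TraceMap()
trace.link("T1", "UseCase.Login")
trace.link("T2", "UseCase.Login")
trace.link("T2", "Contract.Withdraw")
stale = trace.remove_element("UseCase.Login")  # T1 and T2 need review
```

The two dictionaries are redundant by construction, which is exactly what makes traceability queries cheap in both directions.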

GC2. Each automated component requires control/constraint (secondary) input [not shown in the architecture diagram] as well as the primary input of software or test specifications. Secondary inputs should be separated from primary inputs.

GC3. Messages and reports should be very clear and helpful in suggesting ways to fix and/or avoid problems.

GC4. All major outputs should be controlled by an integrated configuration management (CM) system that “understands” their nature and relationships. This may be a custom CM system or a smart wrapper on one or more existing CM, requirements management, and test management system(s).

5. Component Characteristics

We provide a description, a set of component-specific characteristics, and some implementation options for each system component.

5.1. Requirements Notation & Specifier

Description --

For mainstream acceptance, the basic specification notation should be grounded in the UML (the mainstream modeling notation) but limited to those elements that support Essential Modeling [RN9, RN12, RN1]. Because the UML is not test adequate, additional elements must be added. Only additions that are essential to automatic testing are necessary. This Test-Essential UML would include Action Contracts [RN4], Interface Structures [RN5], and Precise Use Cases [RN6].

The specifier is a process support system that assists in the development and modification of annotated specifications of software behavior, structure, and usage. Annotations supplement a basic model with information about usage, failure risk, and possibly test execution outcomes. The specifier provides specification patterns and templates along with strategic advice to support all aspects of specification modeling.

Ideal characteristics of notation & specifier --

Spec notation must:
RNC1. be mainstream (i.e., based on UML)
RNC2. include elements for essential modeling of usage, behavior and structure
RNC3. support automated verification and automated test design i.e., be test adequate
RNC4. be ultra-understandable to enable reliable validation via review
RNC5. be a seamless precursor to mainstream software design

Specifier must:
RSC1. be a true process support system including abilities to --
- specify test adequate models
- add layers of supplementary information about usage, failure risk, and execution outcomes to the models
- flag violations of modeling guidelines specified by the client
- translate any special client terminology into normal form logical expressions
- expand abstract terminology (e.g., derived attributes) into its associated basic expressions
RSC2. verify the syntactic correctness, consistency and relative completeness of a spec
RSC3. have the ability to simulate product usage by "animating" models.
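As one concrete illustration of RSC1's "expand abstract terminology" ability, a naive expander might substitute derived terms until only basic expressions remain. The vocabulary below is invented, and a real specifier would parse expressions rather than string-substitute:

```python
# Sketch of derived-term expansion (RSC1). Definitions must be acyclic;
# a naive textual substitution like this would loop on cyclic terms.
DERIVED = {
    "overdrawn": "balance < 0",
    "vip": "(balance >= 100000) or (tenure_years >= 10)",
}

def expand(expr, derived=DERIVED):
    """Repeatedly substitute derived terms until only basic terms remain."""
    changed = True
    while changed:
        changed = False
        for term, definition in derived.items():
            if term in expr:
                expr = expr.replace(term, "(" + definition + ")")
                changed = True
    return expr

normal_form = expand("overdrawn and not vip")
```

The output is a normal-form logical expression over basic model attributes, which is what the downstream checker and test designer need.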

Implementation Options --

1. Custom-built specifier
2. MetaCase-based specifier (e.g., MetaEdit+ [www.metacase.com/index.html] or ObjectMaker TDK [www.markv.com/markv.com/objectmaker.htm])
3. Restricted UML modeler incorporating test-essential extensions

5.2. Model Checker

Description --

Supports the specification of required properties or constraints (e.g., safety-oriented operational constraints, such as "a traffic light cannot be green in all directions") that are then shown by the checker to be provable from the current model, or unprovable via the generation of a counter-example.
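The traffic-light constraint can be checked exhaustively on a toy controller: enumerate the reachable states and return a counterexample path when the invariant fails. A sketch (the state machine is invented for illustration; real checkers such as Spin handle far larger state spaces symbolically):

```python
# Exhaustive reachability check of a safety invariant on a toy
# two-direction traffic-light controller. All names invented.
from collections import deque

def successors(state):
    """A light may turn green only while the other direction shows red."""
    ns, ew = state
    nxt = []
    if ns == "green":
        nxt.append(("red", ew))
    elif ew == "red":
        nxt.append(("green", ew))
    if ew == "green":
        nxt.append((ns, "red"))
    elif ns == "red":
        nxt.append((ns, "green"))
    return nxt

def check(initial, invariant):
    """Breadth-first search: return a counterexample path to a state
    violating the invariant, or None if it holds everywhere reachable."""
    frontier = deque([[initial]])
    seen = {initial}
    while frontier:
        path = frontier.popleft()
        if not invariant(path[-1]):
            return path  # the trace is the counter-example
        for nxt in successors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

not_all_green = lambda s: not all(c == "green" for c in s)
cex = check(("red", "red"), not_all_green)               # holds: no path
cex2 = check(("red", "red"), lambda s: s[0] != "green")  # deliberately false
```

Note that the returned path is exactly the MCC2-style derivation trace: when a property fails, the checker hands back the sequence of states that leads to the violation.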

Ideal characteristics of checker --

Checker must:
MCC1. proactively support (with examples and advice) the process of constraint specification and result determination.
MCC2. trace the derivation of each proof, so that incorrect information in the spec can be quickly located when unprovable properties are proven.

Implementation Options --

1. Custom-built checker
2. Existing model checker (e.g., Spin or FormalCheck [productized SMV - www.cadence.com/datasheets/formalcheck.html]) with a wrapper

5.3. Automatic Test Designer

Description --

The Test Designer uses (1) information about behavior (e.g., pre- and post-conditions), structure, and usage (including probabilities) from the specification model, (2) knowledge of a 4th Generation Test Language (4GTL) that uses the terminology of the specification model, and (3) knowledge of sound test design principles to design 4GTL test cases and procedures that create pre-conditions and check post-conditions. 4GTL refers to any test specification language (analogous to 4GLs for software development, e.g., report writers) that specifies the essential aspects of a test in high-level abstractions.

The products of the automatic test designer are of three types (in any mix):
(1) sufficient for an automated scripter to produce cost-effective, automated test scripts that are complete (i.e., set up all pre-conditions and check all post-conditions) or incomplete (i.e., set up all pre-conditions and possibly check some post-conditions),
(2) sufficient for a manual scripter to produce cost-effective, automated scripts that are complete or incomplete, or
(3) sufficient for a manual scripter to produce cost-effective, manual scripts.
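In that spirit, a minimal designer might emit one abstract 4GTL-style case per pre-condition branch of an action contract, each setting up its pre-condition and checking the corresponding post-condition. The contract format and the withdraw example below are invented for illustration:

```python
# Sketch: deriving abstract test cases from an action contract.
# The contract representation and the example are hypothetical.

CONTRACT = {
    "action": "withdraw(amount)",
    # each pre-condition branch partitions the input space: one test each
    "pre": [("amount <= balance", True), ("amount <= balance", False)],
    "post": {True: "balance' == balance - amount",
             False: "balance' == balance"},
}

def design_tests(contract):
    """One abstract (4GTL-style) test per pre-condition branch:
    set up the pre-condition, invoke the action, check the post-condition."""
    tests = []
    for condition, holds in contract["pre"]:
        tests.append({
            "setup": f"establish {condition} == {holds}",
            "invoke": contract["action"],
            "check": contract["post"][holds],
        })
    return tests

tests = design_tests(CONTRACT)
```

Each generated case stays in the model's own terminology, which is what makes it reviewable (ADC4) before any concrete scripting happens.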

Ideal characteristics of automatic designer & test specs --

Automatic designer must:
ADC1. provide guidance (e.g., estimates) in the selection of parameters that determine the number and character of the tests to be generated
ADC2. trace the derivation of each test, so that incorrect tests can quickly lead to incorrect information in the spec or bugs in the derivation routines

Generated test specs must:
ADC3. be easily supplemented, since no automated designer generates all tests that need to be run
ADC4. be ultra-understandable (i.e., specified in a 4GTL) to enable reliable validation via review

Implementation Options --

1. Custom-built automatic test designer

5.4. Automatic Scripter

Description --

The scripter creates scripts for specific execution engines (e.g., Mercury, Rational, Segue) by using application interface maps and terminology translation tables to translate the 4GTL specs to the concrete scripts required by execution engines.
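The translation idea can be sketched with two tables: an interface map from abstract controls to concrete widget locators, and a terminology table from 4GTL verbs to engine calls. Both tables and the target command syntax below are invented, not any real engine's API:

```python
# Sketch of 4GTL-to-script translation via an interface map and a
# terminology translation table. All names and syntax are hypothetical.

INTERFACE_MAP = {            # abstract control -> concrete widget locator
    "amount field": "win.main.edit_amount",
    "withdraw button": "win.main.btn_withdraw",
}
TERM_TABLE = {"enter": "SetText", "press": "Click"}  # 4GTL verb -> engine call

def to_script(steps):
    """Translate abstract (verb, control, value) steps into engine commands."""
    lines = []
    for verb, control, value in steps:
        call = TERM_TABLE[verb]
        widget = INTERFACE_MAP[control]
        args = f'"{value}"' if value is not None else ""
        lines.append(f"{call}({widget}{', ' + args if args else ''})")
    return lines

script = to_script([("enter", "amount field", "100"),
                    ("press", "withdraw button", None)])
```

Because the 4GTL spec never names concrete widgets or engine calls, retargeting to a different execution engine means swapping the two tables, not regenerating the tests.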

Ideal characteristics of scripter --

The scripter must:
ASC1. proactively support the creation of terminology translation tables via test spec scans that prime the tables by loading in the terminology and identifying the type of each entry.

Implementation Options --

1. Custom-built scripter

5.5. Downstream Components

Description --

The system should have the ability to instrument design-level (e.g., components) and implementation-level (e.g., lines of code) elements, monitor their execution, and report those that are not executed.
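A minimal sketch of this monitoring idea (the names are invented): the instrumentation reports each element that executes, and the monitor lists the remainder as objectives for the manual test designer.

```python
# Sketch of coverage monitoring for downstream components.
# Class and element names are hypothetical.

class CoverageMonitor:
    def __init__(self, elements):
        self.elements = set(elements)  # everything that was instrumented
        self.executed = set()

    def hit(self, element):
        """Called by the instrumentation each time an element executes."""
        self.executed.add(element)

    def unexecuted(self):
        """Elements to hand to the manual test designer as objectives."""
        return sorted(self.elements - self.executed)

monitor = CoverageMonitor(["Parser.parse", "Parser.recover", "Emitter.emit"])
monitor.hit("Parser.parse")
monitor.hit("Emitter.emit")
missed = monitor.unexecuted()
```

The unexecuted list is the raw input for the translation step described next: each entry prompts the question of what scenario would exercise it.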

The report of unexecuted elements would be passed to a manual test designer, who would translate these design and implementation based test objectives into scenario or function based objectives by asking “what scenario or variant of a function must be tested in order to execute these uncovered elements?”.

This translation may uncover defects in the design or implementation by revealing elements that can never be executed. If execution is possible, the translation may uncover defects in the specs or weaknesses in the test design process. When execution is possible, new objectives would lead to new test specs, either manually [see bottom right corner of the architecture diagram] or automatically designed.

6. Business History of Automatic Test Generation Tools

We can distinguish levels of success:

Level                     Definition                                          Examples
1. Commercial Product     Generates a (a) substantial or (b) marginal         a. None
                          profit as a stand-alone product                     b. T-Vec, TestMaster, ToolSet_Certify
2. Commercial Component   Generates sufficient profit as a component          StP/T?, SoftTest?,
                          in or add-on to a tool suite                        5 Use Case products?, 2 UI products?
3. Strategic              Adds value to a consulting practice                 TestFrame
4. Research               Demonstrates feasibility of a                       Many state-machine
                          technical approach                                  based systems

History suggests that incorporating a test generation tool into a suite significantly increases the likelihood of commercial success.

7. Knowns and Unknowns

We know:

- there are over 2 dozen commercial, UML-compliant modeling systems [RS3]. (Some will not survive.)

- essential models are adequate for model checking and logical test design [RS2, RS4, MC2].

- state-based, tabular-format research tools can be successful specifiers [RS5], linked to model checkers [MC2] and execution-level script designer/generators [TD2].

- SCR (1978) has been around over 20 years and still has a very small user base.

- a 4GTL can be very effective for test specification [SC1, SC2].

We do NOT know:

- the exact composition of a test-adequate UML

- if a test-adequate UML can effectively model a broad range of systems, seamlessly integrate with software design systems, and provide ultra-understandable results for review.

- the exact content and format of a 4GTL spec

- if a test designer can produce 4GTL results that can be easily translated into executable scripts.

- if an MbV&T system with these (or any other) characteristics can attract a substantial number of users.

8. Acknowledgements

Thanks to Ian Craggs, Alan Jorgensen and Vera Johnson for careful review and thoughtful suggestions.

Thanks to Alan Hartman and his co-authors for providing an alternative architecture [TA2].

9. References

Testing Architectures --

TA1. Blackburn, Mark R., Busser, Robert, Nauman, Aaron, "Removing Requirement Defects and Automating Test" Proceedings of STAR East 2001 Software Quality Engineering

TA2. Craggs, Ian and Griffiths, Ian "Model-Based Testing for Large-Scale Industrial Software" IBM Technical Report.

TA3. Gronau, Ilan, Hartman, Alan, Kirshin, Andrei, Nagin, Kenneth, Olvovsky, Sergey, "A Methodology and Architecture for Automated Software Testing" IBM Research Lab in Haifa, Technical Report. Available at http://www.haifa.il.ibm.com/projects/verification/gtcb/publications.html

Requirements Notation --

RN1. Constantine, Larry L. and Lockwood, Lucy A. D., Software Use: A Practical Guide to the Models & Methods of Usage-Centered Design, ACM Press 1999 [Essential Modeling]

RN2. Douglass, Bruce Powel Real-Time UML: Developing Efficient Objects for Embedded Systems Addison-Wesley 2000 [UML]

RN3. Faulk, S. R., Brackett, J., Ward, P., and Kirby, J., Jr. “The CoRE method for real-time requirements” IEEE Software Vol. 9, No. 5 (Sept. 1992) pp. 22-33 [CoRE]

RN4. Gelperin, David “Specifying Consequences with Action Contracts” Available in the U3 Modeling Resource Kit at http://www.stickyminds.com — Use search term U3 [Action Contracts]

RN5. Gelperin, David “Specifying Software with If-Then Tables: What, Why, & How” Available in the U3 Modeling Resource Kit at http://www.stickyminds.com — Use search term U3 [Tables]

RN6. Gelperin, David “Ultra-Understandable Usage (U3) Modeling Resource Kit” Available at http://www.stickyminds.com — Use search term U3 [Usage Modeling]

RN7. Janicki, R., Parnas, D. L., and J. Zucker “Tabular Representations in Rational Documents” CRL Report No. 313 1995. Available at http://www.crl.mcmaster.ca/SERG/serg.publications.html [Tables]

RN8. Leveson, Nancy G., Mats P.E. Heimdahl, and Jon Damon Reese, “Designing Specification Languages for Process Control Systems: Lessons Learned and Steps to the Future” SIGSOFT FOSE '99 (Foundations of Software Engineering), Toulouse, September 1999 Available at http://sunnyday.mit.edu/papers.html [Modeling]

RN9. McMenamin, Stephen, Palmer, John Essential Systems Analysis Prentice-Hall 1984 [Essential Modeling]

RN10. Paterno, Fabio Model-Based Design and Evaluation of Interactive Applications Springer Applied Computing 1999 [Task Modeling]

RN11. Schmuller, Joseph SAMS Teach Yourself UML in 24 Hours Sams Press 1999 [UML]

RN12. Ward, P.T and Mellor, S.J. Structured Development for Real-Time Systems, Vol. 2: Essential Modeling Techniques Yourdon Press, 1985 [Essential Modeling]

Requirements Specifier --

RS1. Bharadwaj, R. and C. Heitmeyer, "Developing High Assurance Avionics Systems with the SCR Requirements Method" in Proc. 19th Digital Avionics Systems Conference, 7-13 October 2000, Philadelphia, PA. Available at http://chacs.nrl.navy.mil/personnel/heitmeyer.html

RS2. Chechik, Marsha “SC(R)3: Towards Usability of Formal Methods” Proceedings of CASCON'98, pp. 177-189, November 1998. Available at http://www.cs.toronto.edu/~chechik/pubs/cascon98.ps

RS3. Gelperin, David “Commercial Tools for Software/System Modeling” Available in the U3 Modeling Resource Kit at http://www.stickyminds.com — Use search term U3

RS4. Heitmeyer, Constance L., Ralph D. Jeffords, and Bruce G. Labaw, "Automated Consistency Checking of Requirements Specifications," ACM Trans. on Software Eng. and Methodology 5, 3, July 1996, 231-261. Available at http://chacs.nrl.navy.mil/personnel/heitmeyer.html

RS5. C. Heitmeyer, J. Kirby, B. Labaw and R. Bharadwaj, "SCR*: A Toolset for Specifying and Analyzing Software Requirements" Proc. Computer-Aided Verification, 10th Ann. Conf. (CAV'98), Vancouver, Canada, 1998. Available at http://chacs.nrl.navy.mil/personnel/heitmeyer.html

Model Checker --

MC1. Chan, William, Richard J. Anderson, Paul Beame, Steve Burns, Francesmary Modugno, David Notkin, and Jon D. Reese, “Model Checking Large Software Specifications” IEEE Transactions on Software Engineering 24(7), pp. 498-520, July 1998.

MC2. Heitmeyer, Constance L., Kirby, James Jr., Labaw, Bruce, Archer, Myla, Bharadwaj, Ramesh, "Using Abstraction and Model Checking to Detect Safety Violations in Requirements Specifications" IEEE Transactions on Software Engineering, vol. 24, no. 11, November 1998. Available at http://chacs.nrl.navy.mil/personnel/heitmeyer.html

MC3. Holzmann, Gerard J., “The Spin Model Checker”, IEEE Trans. on Software Engineering, Vol. 23, No. 5, May 1997, pp. 279-295.

Test Designer --

TD1. Ammann, Paul E., Black, Paul E., and Majurski, William “Using Model Checking to Generate Tests from Specifications”. Available at http://hissa.nist.gov/~black/Papers/icfem98.ps.gz

TD2. Blackburn, Mark “Automatic Generation of Test Vectors for SCR-style Specifications” Proceedings of Compass 1997. Available in Publications at www.t-vec.com.

TD3. Blackburn, Mark “Using Models for Test Generation and Analysis” Proceedings of Digital Avionics System Conference, 1998. Available in Publications at www.t-vec.com.

TD4. Gargantini, A. and C. Heitmeyer, "Using Model Checking to Generate Tests from Requirements Specifications" Proc., Joint 7th Eur. Software Engineering Conf. and 7th ACM SIGSOFT Intern. Symp. on Foundations of Software Eng. (ESEC/FSE99), Toulouse, FR, Sept. 6-10, 1999. Available at http://chacs.nrl.navy.mil/personnel/heitmeyer.html

TD5. Statezni, David “A Look at T-Vec’s Test Vector Generation System” STQE Vol. 3, No. 3, May/June 2001.

Script Generator --

SC1. Buwalda, Hans “Testing with Action Words” in Fewster, Mark and Graham, Dorothy Software Test Automation Addison Wesley 1999

SC2. Buwalda, Hans and Kasdorp, Maartje “Getting Automated Testing Under Control” Software Testing & Quality Engineering Vol. 1, No. 6 Nov./Dec. 1999

SC3. Nagle, Carl Interest Group on Data Driven Testing at http://groups.yahoo.com/group/RobotDDEUsers
