USER'S GUIDE Version 6.4

New editions of this guide incorporate all material added or changed since the previous edition. Update packages may be used between editions. The manual printing date changes when a new edition is printed. The contents and format of this manual are subject to change without notice.

Generated: 10/1/2017, 8:42 PM
Rev: cb0043d
Part Number: User's Guide for VectorCAST/Manage v.6.4

VectorCAST is a trademark of Vector Software, Inc.

© Copyright 2017, Vector Software, Inc. All rights reserved. No part of the material protected by this copyright notice may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording, or by any informational storage and retrieval system, without written permission from the copyright owner.

U.S. Government Restricted Rights: This computer software and related documentation are provided with Restricted Rights. Use, duplication or disclosure by the Government is subject to restrictions as set forth in the governing Rights in Technical Data and Computer Software clause of DFARS 252.227-7015 (June 1995) and DFARS 227.7202-3(b). Manufacturer is Vector Software, Inc., East Greenwich, RI 02818, USA.

Vector Software reserves the right to make changes in specifications and other information contained in this document without prior notice. Contact Vector Software to determine whether such changes have been made.

Third-party copyright notices are contained in the file 3rdPartyLicenses.txt, located in the VectorCAST installation directory.

Table of Contents

INTRODUCTION 6
  VectorCAST/Manage Overview 7
  The VectorCAST/Manage Project 9
WORK FLOW SCENARIOS 12
  Using Manage in Your Organization 13
  Single User Setting Up an Initial Manage Project 14
    Create An Empty Project 14
    Customize the Configuration 14
    Create a New Environment 15
    Add Test Cases 16
  Sharing a Manage Project With Your Team 19
    Put a Project Under Source Code Management 19
    Add New Unit Test Environment 19
    Create Automatic Regression Test Script 20
  Performing Change-Based Testing 22
    Modify Source Code 22
    Perform Incremental Build/Execute 22
  Testing with Multiple Configurations 24
    Add a New Compiler Configuration 24
    Build / Execute the New Compiler 24
    Add a New Environment to the Configuration 25
  Sharing Tests and Results Between Multiple Users 27
    Import Results from Master Project 27
    Fix Test Failures Locally 28
    Commit Changes to Source Code Management 30
  Using Imported Results With Change-Based Testing 32
    Import Results 32
    Make Modifications 32
    Perform Incremental Rebuild 33
  Sharing Testing Trends Using Reports and Graphs 34
    Store Results and Generate Report 34
    Modify Graph Options 34
  Building a Manage Project From Existing Environments 37
    Create a New Project 37
  Managing Configuration Options for Multiple Environments 40
    Set Up Common Configuration Options 40
    Clear Elevated Configuration Options 41
    Add a New Test Suite Node and Reconfigure Options 42
  Creating a Compiler Node Using an Existing .CFG File 44
    Load the .CFG File 44
    Open the Configuration Editor 44
  Change-Based Testing with VectorCAST/QA 46
    Using the Python Script 46
    Incremental Build and Execute 49
VECTORCAST/MANAGE TOOL REFERENCE 52
  The Manage Project Structure 53
    The project.vcm File 53
    Project Directory 53
    Environment Directory 53
  Using the Manage Interface 55
    Understanding The Management Tree 55
    Understanding the Files Tab 61
    The Project Editor 70
    The Messages Window 70
  Using the Jobs Window and Job Monitor 73
    The Jobs Window 73
    Opening the Job Monitor 74
    Job Status Viewer 75
    Manage Status Viewer 77
  Project-Related Tasks 80
    Creating an Empty Manage Project 80
    Setting Default Options for Your Project 80
    Creating a New Unit Test Environment 82
    Creating a Manage Project From Existing VectorCAST Environments 84
    Opening a Manage Project 87
    Option Factoring in the Project Wizard 88
    Building and Executing Tests 88
    Configuration-Based Test Cases 90
    Target Setup and Teardown 93
    Opening a Test Environment 95
    Automating Test Script Maintenance 96
    Storing Results 101
    Using the Project Update Editor 102
    Editing and Modifying Configurations 105
    Creating Specialized Environment Configurations 110
    Adding a Compiler 113
    Add an Environment to the Project Tree 115
    Disabling/Enabling an Environment 117
    Renaming an Environment 118
    Using Monitored Environments in a Manage Project 119
    Understanding the Project Lock 120
    Integrating VectorCAST/Manage With Your SCM System 122
    Understanding Workspaces 122
    Working With Imported Results 124
  Manage Reports 129
    Trend Reporting 129
    Summary Report 134
    Code Coverage Summary 139
    Original Source Coverage Report 142
    Manage Data Summary 146
  Change-Based Testing 151
  VectorCAST/QA - System Test Automation 152
    The Python Configuration Script 152
    Add a Test Case Using Python Configuration Script 153
    Add a Manual Test 154
    Incremental Build and Execute 156
    Interactive Execution of Tests 158
    Component Coverage 160
    Change Impact Report 164
  Python API 167
    Manage Python Global Functions 167
    Manage Python Classes 167
    Exception Errors 176
    Example Python Script 179
  VectorCAST/Manage and Jenkins Integration 181
COMMAND LINE INTERFACE 182
  Introduction 183
  Running a Script 183
  Running a Manage Script With Multiple Commands 184
  The Manage Tutorial Revisited 186
    Creating the Project 186
    Creating an Environment Group 187
    Creating Source, Platform, Compiler and Test Suite Nodes 188
    Importing Test Environments 188
    Adding an Environment Group to a Test Suite 190
    Building and Executing the Project 190
    Storing, Listing, and Deleting Results 191
    Generating Reports 192
    Duplicating a Test Suite 194
    Setting Coverage Type on a Test Suite 195
  Example Script (.bat) 197
INDEX 199

INTRODUCTION

VECTORCAST/MANAGE OVERVIEW

VectorCAST/Manage is an extension of the VectorCAST family of unit and integration testing tools. VectorCAST/Manage allows tests and test results to be shared across the enterprise, providing a single point of control for all unit- and integration-test activities. At-a-glance logs, summary reports, and color-coded pass/fail criteria highlight the status of each test within the regression suite.

Within a Manage Project, Test Environments can be grouped into larger "Environment Groups" and "Test Suites." Environments can be members of multiple Environment Groups, and Environment Groups can be assigned to multiple Test Suites. This enables users to structure their VectorCAST/Manage project to match the architecture of their application: for example, application sub-systems map onto Manage Test Suites or Environment Groups, and individual source files map onto Manage Test Environments. Because Environment Groups and Test Suites can be easily duplicated, the same tests can be run using various source baselines, on different host platforms, or with different compilers.

The integrated SQL database and graphing facility in VectorCAST/Manage enables users to view historical trend data for an individual software component, or for any group of software components. This makes it easy to analyze regression trends across the software-testing life cycle.

The Manage Status Panel and HTML reports enable users to view the current status of each test case. Data is automatically recorded for build status and duration, test execution status and duration, and code coverage achieved. Using the integrated Python interpreter, additional comparisons can be added for each component, and a column for those comparisons added to the report. For example, a user might want to compare the test execution time to some threshold, or run a static analysis tool such as "Lint" on the source files in the Manage project.
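The actual hooks for such checks are documented in the "Python API" and "The Python Configuration Script" sections later in this guide. Purely as a minimal sketch (the function names, the threshold value, and the "lint" command line below are illustrative assumptions, not the real Manage API), per-component checks of the kind described above might look like this:

    # Illustrative sketch only: how these functions would be registered with
    # VectorCAST/Manage, and the "lint" command itself, are assumptions.
    import subprocess

    EXEC_TIME_LIMIT_S = 2.0  # assumed project-specific execution-time budget

    def execution_time_check(component_name, exec_time_s):
        # Produce a PASS/FAIL string that could populate an extra report column.
        status = "PASS" if exec_time_s <= EXEC_TIME_LIMIT_S else "FAIL"
        return "%s: %s ran in %.2fs (limit %.2fs)" % (
            status, component_name, exec_time_s, EXEC_TIME_LIMIT_S)

    def lint_check(source_files):
        # Run an external lint tool over the component's source files and
        # summarize the number of non-empty output lines as findings.
        result = subprocess.run(["lint"] + list(source_files),
                                capture_output=True, text=True)
        findings = [line for line in result.stdout.splitlines() if line.strip()]
        return "%d lint finding(s)" % len(findings)

    if __name__ == "__main__":
        # Stand-alone smoke test of the comparison logic.
        print(execution_time_check("manager.c", 1.7))

The values returned by such checks would then appear as an additional column alongside the recorded build, execution, and coverage data.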
The Test Execution Module allows tests to be distributed across multiple physical or virtual servers, with all results from the remotely executed tests incorporated in the VectorCAST/Manage database.

The VectorCAST/Manage project allows multiple users simultaneous access to run tests and to view status and results. The entire development team can use VectorCAST/Manage:

- Program Managers use the high-level reports and graphs to track testing progress.
- QA Managers use the status panel and reports to track release readiness.
- QA Engineers control multiple test configurations and test scheduling.
- Software Engineers create new tests, and identify and fix bugs.
- Software Architects control the overall design of the Manage project as well as the peer review and change control process.
- Engineers use the tool to easily design test campaigns and monitor release readiness.

THE VECTORCAST/MANAGE PROJECT

A VectorCAST/Manage project is the set of all files, directories, and test components managed and controlled by VectorCAST/Manage. A Manage project is made up of the project file (yourprojectname.vcm) and the project directory. The project file is an XML file and is typically kept under source code control (an illustrative sketch of reading such a file appears at the end of this section). The project directory is the directory in which Manage stores the files needed to build and execute environments, together with the SQL historical database; it has the same name as the project.

A Manage project can be created either from a set of existing VectorCAST Test Environments, using the New Project Wizard, or as an empty project. The interface of a typical VectorCAST/Manage project is shown below.

The project tree for VectorCAST/Manage is displayed in the upper left-hand corner of the application window. It depicts a hierarchical view of the components of the project as a set of nodes on a tree. The following is a list of Project Tree nodes from the root, or top-most level, to the leaves, or bottom-most level:

- The root node is the overarching container that holds all the other nodes within a project. There is a single root node for each project, and its name reflects the name of the project.
- The source node represents all of the code for a given baseline to be tested.
- The platform node represents the platform of the data sources for the environments under that node.
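Because the project file is plain XML, it can be inspected with standard tooling when needed. The sketch below is purely illustrative: the element and attribute names ("testsuite", "environment", "name") are assumptions rather than the documented project.vcm schema, so treat it as a pattern for browsing the hierarchy described above, not as working code against a real project.

    # Illustrative sketch only: the tag and attribute names are assumed,
    # not taken from the actual project.vcm schema.
    import xml.etree.ElementTree as ET

    def list_project_tree(vcm_path):
        # Print each test suite and the environments grouped under it.
        tree = ET.parse(vcm_path)
        for suite in tree.getroot().iter("testsuite"):
            print("Test suite:", suite.get("name"))
            for env in suite.iter("environment"):
                print("  Environment:", env.get("name"))

    if __name__ == "__main__":
        list_project_tree("yourprojectname.vcm")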