USER'S GUIDE Version 6.4. New editions of this guide incorporate all material added or changed since the previous edition.
Total pages: 16. File type: PDF. Size: 1020 KB.

Recommended publications
-
Opportunities and Open Problems for Static and Dynamic Program Analysis. Mark Harman∗, Peter O'Hearn∗ (∗Facebook London and University College London, UK).
From Start-ups to Scale-ups: Opportunities and Open Problems for Static and Dynamic Program Analysis. Mark Harman∗, Peter O'Hearn∗. ∗Facebook London and University College London, UK.

Abstract—This paper describes some of the challenges and opportunities when deploying static and dynamic analysis at scale, drawing on the authors' experience with the Infer and Sapienz Technologies at Facebook, each of which started life as a research-led start-up that was subsequently deployed at scale, impacting billions of people worldwide. The paper identifies open problems that have yet to receive significant attention from the scientific community, yet which have potential for profound real world impact, formulating these as research questions that, we believe, are ripe for exploration and that would make excellent topics for research projects.

I. INTRODUCTION
How do we transition research on static and dynamic analysis techniques from the testing and verification research communities to industrial practice? Many have asked this question, and others related to it. [...] research questions that target the most productive intersection we have yet witnessed: that between exciting, intellectually challenging science, and real-world deployment impact. Many industrialists have perhaps tended to regard it unlikely that much academic work will prove relevant to their most pressing industrial concerns. On the other hand, it is not uncommon for academic and scientific researchers to believe that most of the problems faced by industrialists are either boring, tedious or scientifically uninteresting. This sociological phenomenon has led to a great deal of miscommunication between the academic and industrial sectors. We hope that we can make a small contribution by focusing on the intersection of challenging and interesting scientific problems with pressing industrial deployment needs. Our aim is to move the debate beyond relatively unhelpful observations we have typically encountered in, for example, conference
-
AddressSanitizer + Code Coverage. Kostya Serebryany, Google. EuroLLVM 2014. New and shiny: -fprofile-instr-generate.
AddressSanitizer + Code Coverage. Kostya Serebryany, Google. EuroLLVM 2014.

New and shiny: -fprofile-instr-generate
● Coming this year
● Fast BB-level code coverage
● Increment a counter per every (*) BB
  ○ Possible contention on counters
● Creates special non-code sections
  ○ Counters
  ○ Function names, line numbers

Meanwhile: ASanCoverage
● Tiny prototype-ish thing:
  ○ Part of AddressSanitizer
  ○ 30 lines in LLVM, 100 in run-time
● Function- or BB-level coverage
  ○ Booleans only, not counters
  ○ No contention
  ○ No extra sections in the binary

At compile time:
if (!*BB_Guard) { __sanitizer_cov(); *BB_Guard = 1; }

At run time:
void __sanitizer_cov() { Record(GET_CALLER_PC()); }

At exit time:
● For every binary/DSO in the process:
  ○ Dump observed PCs in a separate file as 4-byte offsets

At analysis time:
● Compare/Merge using 20 lines of python
● Symbolize using regular DWARF

% cat cov.c
int main() { }
% clang -g -fsanitize=address -mllvm -asan-coverage=1 cov.c
% ASAN_OPTIONS=coverage=1 ./a.out
% wc -c *sancov
4 a.out.15751.sancov
% sancov.py print a.out.15751.sancov
sancov.py: read 1 PCs from a.out.15751.sancov
sancov.py: 1 files merged; 1 PCs total
0x4850b7
% sancov.py print *.sancov | llvm-symbolizer --obj=a.out
main
/tmp/cov.c:1:0

Fuzzing with coverage feedback
● Test corpus: N random tests
● Randomly mutate random test
  ○ If new BB is covered -- add this test to the corpus
● Many new bugs in well fuzzed projects!

Feedback from our customers
● Speed is paramount
● Binary size is important
  ○ Permanent & temporary storage, tmps, I/O
  ○ Stripping non-code
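
The "fuzzing with coverage feedback" loop on these slides can be captured in a few lines. The sketch below is an illustrative Python rendering of that loop, not part of the ASanCoverage tooling; target() and mutate() are hypothetical helpers (target() might run the instrumented binary and return the set of PC offsets parsed from the dumped .sancov file).

    import random

    def mutate(data: bytes) -> bytes:
        """Deliberately naive mutator: flip one random byte."""
        if not data:
            return bytes([random.randrange(256)])
        buf = bytearray(data)
        buf[random.randrange(len(buf))] = random.randrange(256)
        return bytes(buf)

    def coverage_guided_fuzz(target, seed_corpus, iterations=10000):
        """Keep a mutated input only if it reaches a basic block (PC) not seen before."""
        corpus = list(seed_corpus)
        seen_pcs = set()
        for data in corpus:                  # prime coverage with the seed inputs
            seen_pcs |= target(data)         # target() returns the set of PCs it hit
        for _ in range(iterations):
            parent = random.choice(corpus)   # pick a random test from the corpus
            child = mutate(parent)           # randomly mutate it
            pcs = target(child)
            if pcs - seen_pcs:               # new basic block covered?
                corpus.append(child)         # keep the input: it found new coverage
                seen_pcs |= pcs
        return corpus

This is the same corpus-growing strategy the slides credit with finding many new bugs in well fuzzed projects.
-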
A Framework for Evaluating Performance of Software Testing Tools
INTERNATIONAL JOURNAL OF SCIENTIFIC & TECHNOLOGY RESEARCH, VOLUME 9, ISSUE 02, FEBRUARY 2020, ISSN 2277-8616. A Framework for Evaluating Performance of Software Testing Tools. Pramod Mathew Jacob, Prasanna Mani.

Abstract: Software plays a pivotal role in this technology era. Due to its ever-widening application domains, the quality of the software being developed must be monitored and controlled. Software organizations follow many testing methodologies to perform quality management. Testing methodologies can be either manual or automated. Automated testing tools have gained massive acceptance among software professionals due to their enhanced features and functionality compared with manual testing. There are hundreds of test automation tools available, among which some perform exceptionally well. Because such a large set of tools is available, it is a herculean task for the project manager to choose the automation tool that suits their project domain. In this paper, we derive a software testing tool selection model that evaluates the performance aspects of a script-based testing tool. Experimental evaluation shows that it can be used to compare and evaluate various performance characteristics of commercially accepted test automation tools, based on user experience as well as system performance.

Index Terms: Automated testing, Software testing, Test script, Testing tool, Test bed, Verification and Validation.

1 INTRODUCTION
Software has advanced in recent days by expanding its application domains. Software is embedded in almost all electronic gadgets and systems. In this scenario the quality of the software plays a significant role. The customer or end-user should be satisfied, which depends primarily on the quality and capability of the software being developed.
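
The paper's own selection model is not reproduced in this excerpt. As a purely illustrative sketch of how a script-based tool comparison can be scored, the criteria, weights, and ratings below are invented for this example and are not the authors' model.

    # Hypothetical weighted-sum comparison of candidate test automation tools.
    WEIGHTS = {
        "script_reusability": 0.30,
        "execution_speed": 0.25,
        "reporting": 0.20,
        "learning_curve": 0.15,
        "cost": 0.10,
    }

    tools = {
        "Tool A": {"script_reusability": 8, "execution_speed": 7, "reporting": 9,
                   "learning_curve": 6, "cost": 5},
        "Tool B": {"script_reusability": 6, "execution_speed": 9, "reporting": 7,
                   "learning_curve": 8, "cost": 7},
    }

    def score(ratings: dict) -> float:
        # Weighted sum of 0-10 ratings, one per criterion.
        return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)

    for name, ratings in sorted(tools.items(), key=lambda kv: score(kv[1]), reverse=True):
        print(f"{name}: {score(ratings):.2f}")
-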
Automated Testing of Firmware Installation and Update Scenarios for Peripheral Devices
DEGREE PROJECT IN COMPUTER SCIENCE AND ENGINEERING, SECOND CYCLE, 30 CREDITS, STOCKHOLM, SWEDEN 2019. Automated testing of firmware installation and update scenarios for peripheral devices. DAG REUTERSKIÖLD. KTH Royal Institute of Technology, School of Electrical Engineering and Computer Science. Master in Computer Science. Date: August 12, 2019. Supervisor: Hamid Faragardi. Examiner: Elena Troubitsyna. Host company: Tobii AB. Swedish title: Automatisering av enhetsinstallation, uppdatering och testning med hjälp av virtuella maskiner (Automation of device installation, updating and testing using virtual machines).

Abstract: This research presents an approach to transition from manual to automated testing of hardware-specific firmware. The manual approach to firmware testing can be repetitive and time consuming. A significant proportion of the time is spent on cleaning and re-installing operating systems so that old firmware does not interfere with the newer firmware being tested. The approach in this research utilizes virtual machines and presents an automation framework. One component of the automation framework is an application that imitates connected peripheral devices in order to bypass the hardware dependencies of firmware installers. The framework also consists of automation and pipeline scripts whose objective is to execute firmware installers and detect errors and abnormalities in the installation and updating processes. The framework can run on locally hosted virtual machines, but is most applicable with cloud-hosted virtual machines, where it is part of a continuous integration pipeline that builds, downloads, installs, updates and tests new firmware versions in a completely automated manner. The framework is evaluated by measuring and comparing execution times with manually conducted installation and updating tests, and the results show that the framework completes tests much faster than the manual approach.
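
As a hedged illustration of the kind of pipeline step the thesis describes (revert a virtual machine to a clean snapshot, run a firmware installer in the guest, and scan its output for errors), the sketch below uses VirtualBox's VBoxManage CLI and SSH purely as example mechanisms. The VM name, snapshot name, installer path, silent-install flag, and error patterns are invented placeholders; the thesis is not tied to these particular tools.

    import re
    import subprocess

    ERROR_PATTERNS = [r"error", r"failed", r"rollback"]   # illustrative patterns only

    def run_in_vm(vm: str, command: str) -> str:
        """Run a command inside the guest over SSH and return its combined output.
        Assumes the guest is reachable by name and has key-based SSH configured."""
        result = subprocess.run(["ssh", vm, command],
                                capture_output=True, text=True, timeout=1800)
        return result.stdout + result.stderr

    def test_firmware_install(vm: str, snapshot: str, installer: str) -> bool:
        # 1. Revert to a clean snapshot so older firmware cannot interfere.
        subprocess.run(["VBoxManage", "snapshot", vm, "restore", snapshot], check=True)
        subprocess.run(["VBoxManage", "startvm", vm, "--type", "headless"], check=True)
        # (Waiting for the guest to finish booting is elided here.)

        # 2. Run the installer silently inside the guest and capture its log.
        log = run_in_vm(vm, f"{installer} /silent")

        # 3. Scan the output for known error signatures.
        ok = not any(re.search(p, log, re.IGNORECASE) for p in ERROR_PATTERNS)
        subprocess.run(["VBoxManage", "controlvm", vm, "poweroff"], check=True)
        return ok
-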
GUI-Guided Test Script Repair for Mobile Apps
This is the author's version of an article published in IEEE Transactions on Software Engineering; changes were made by the publisher prior to publication, and the final version of record is available at http://dx.doi.org/10.1109/TSE.2020.3007664.

GUI-Guided Test Script Repair for Mobile Apps. Minxue Pan, Tongtong Xu, Yu Pei, Zhong Li, Tian Zhang, Xuandong Li.

Abstract—Graphical User Interface (GUI) testing is widely used to test mobile apps. As mobile apps are frequently updated and need repeated testing, to reduce the test cost, their test cases are often coded as scripts to enable automated execution using test harnesses/tools. When those mobile apps evolve, however, many of the test scripts may become broken due to changes made to the app GUIs. While it is desirable that the broken scripts get repaired, doing so manually can be prohibitively expensive if the number of tests needing repair is large. We propose in this paper a novel approach named METER to repairing broken GUI test scripts automatically when mobile apps evolve. METER leverages computer vision techniques to infer GUI changes between two versions of a mobile app and uses the inferred changes to guide the repair of GUI test scripts. Since METER only relies on screenshots to repair GUI tests, it is applicable to apps targeting open or closed source mobile platforms. In experiments conducted on 22 Android apps and 6 iOS apps, repairs produced by METER helped preserve 63.7% and 38.8% of all the test actions broken by the GUI changes, respectively.
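
METER's repair algorithm is not reproduced in this excerpt. As an illustrative sketch of the underlying idea (locate the widget from the old version's screenshot in the new version's screenshot, then remap the coordinates of a recorded tap action), the OpenCV-based snippet below uses template matching; the file names, bounding box, and threshold are made-up examples.

    import cv2

    def remap_tap(old_screenshot, new_screenshot, old_bbox, threshold=0.8):
        """Return updated tap coordinates for a widget that may have moved,
        or None if no confident match is found in the new screenshot."""
        x, y, w, h = old_bbox
        template = old_screenshot[y:y + h, x:x + w]

        # Normalized cross-correlation between the widget image and the new screen.
        result = cv2.matchTemplate(new_screenshot, template, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        if max_val < threshold:
            return None                      # widget likely removed or redesigned

        nx, ny = max_loc
        return (nx + w // 2, ny + h // 2)    # tap the center of the matched region

    # Usage sketch: screenshots of the same screen from app v1 and v2.
    old = cv2.imread("v1_home.png")
    new = cv2.imread("v2_home.png")
    new_tap = remap_tap(old, new, old_bbox=(40, 600, 200, 80))
-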
A Zero Kernel Operating System: Rethinking Microkernel Design by Leveraging Tagged Architectures and Memory-Safe Languages
A Zero Kernel Operating System: Rethinking Microkernel Design by Leveraging Tagged Architectures and Memory-Safe Languages. Justin Restivo, B.S., Massachusetts Institute of Technology (2019). Submitted to the Department of Electrical Engineering and Computer Science in partial fulfillment of the requirements for the degree of Master of Engineering in Computer Science and Engineering at the Massachusetts Institute of Technology, February 2020. © Massachusetts Institute of Technology 2020. All rights reserved.

Author: Department of Electrical Engineering and Computer Science, January 29, 2020. Certified by: Dr. Howard Shrobe, Principal Research Scientist, MIT CSAIL, Thesis Supervisor. Certified by: Dr. Hamed Okhravi, Senior Staff Member, MIT Lincoln Laboratory, Thesis Supervisor. Certified by: Dr. Samuel Jero, Technical Staff, MIT Lincoln Laboratory, Thesis Supervisor. Accepted by: Katrina LaCurts, Chair, Master of Engineering Thesis Committee.

DISTRIBUTION STATEMENT A. Approved for public release. Distribution is unlimited. This material is based upon work supported by the Assistant Secretary of Defense for Research and Engineering under Air Force Contract No. FA8702-15-D-0001. Any opinions, findings, conclusions
-
Guidelines on Minimum Standards for Developer Verification of Software
Guidelines on Minimum Standards for Developer Verification of Software. Paul E. Black, Barbara Guttman, Vadim Okun. Software and Systems Division, Information Technology Laboratory. July 2021.

Abstract: Executive Order (EO) 14028, Improving the Nation's Cybersecurity, 12 May 2021, directs the National Institute of Standards and Technology (NIST) to recommend minimum standards for software testing within 60 days. This document describes eleven recommendations for software verification techniques as well as providing supplemental information about the techniques and references for further information. It recommends the following techniques:
• Threat modeling to look for design-level security issues
• Automated testing for consistency and to minimize human effort
• Static code scanning to look for top bugs
• Heuristic tools to look for possible hardcoded secrets
• Use of built-in checks and protections
• "Black box" test cases
• Code-based structural test cases
• Historical test cases
• Fuzzing
• Web app scanners, if applicable
• Address included code (libraries, packages, services)
The document does not address the totality of software verification, but instead recommends techniques that are broadly applicable and form the minimum standards. The document was developed by NIST in consultation with the National Security Agency. Additionally, we received input from numerous outside organizations through papers submitted to a NIST workshop on the Executive Order held in early June 2021 and discussion at the workshop, as well as follow up with several of the submitters.

Keywords: software assurance; verification; testing; static analysis; fuzzing; code review; software security.

Disclaimer: Any mention of commercial products or reference to commercial organizations is for information only; it does not imply recommendation or endorsement by NIST, nor is it intended to imply that the products mentioned are necessarily the best available for the purpose.
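
One of the recommended techniques, heuristic scanning for possible hardcoded secrets, is easy to illustrate. The patterns below are simplistic examples written for this sketch; they are not drawn from the NIST document or from any particular scanner.

    import re
    import sys
    from pathlib import Path

    # Simplistic illustrative patterns; real secret scanners use much richer rule sets.
    SECRET_PATTERNS = [
        (re.compile(r"(?i)(password|passwd|pwd)\s*=\s*['\"][^'\"]{4,}['\"]"), "possible hardcoded password"),
        (re.compile(r"(?i)(api[_-]?key|secret)\s*=\s*['\"][A-Za-z0-9/+=_-]{16,}['\"]"), "possible API key"),
        (re.compile(r"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"), "embedded private key"),
    ]

    def scan(root: str) -> int:
        findings = 0
        for path in Path(root).rglob("*"):
            if not path.is_file():
                continue
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue
            for lineno, line in enumerate(text.splitlines(), start=1):
                for pattern, label in SECRET_PATTERNS:
                    if pattern.search(line):
                        findings += 1
                        print(f"{path}:{lineno}: {label}")
        return findings

    if __name__ == "__main__":
        sys.exit(1 if scan(sys.argv[1] if len(sys.argv) > 1 else ".") else 0)
-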
The Impact of Software Evolution on Code Coverage Information
University of Nebraska - Lincoln, DigitalCommons@University of Nebraska - Lincoln, CSE Conference and Workshop Papers, 2001.

Elbaum, Sebastian; Gable, David; and Rothermel, Gregg, "The Impact of Software Evolution on Code Coverage Information" (2001). CSE Conference and Workshop Papers, 132. https://digitalcommons.unl.edu/cseconfwork/132. Proceedings, IEEE International Conference on Software Maintenance, 2001, pages 170-179. DOI: 10.1109/ICSM.2001.972727.

The Impact of Software Evolution on Code Coverage Information. Sebastian Elbaum (Dept. of Computer Science and Engineering, University of Nebraska-Lincoln), David Gable (Dept. of Computer Science and Engineering, University of Nebraska-Lincoln), Gregg Rothermel (Computer Science Dept., Oregon State University, Corvallis, OR).

Abstract: Often, code coverage information is collected for a version of a program to aid in some maintenance or testing task performed on that particular version.
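
The problem the paper studies (coverage collected on one version of a program being reused on a later version) can be made concrete with a small sketch: remap covered line numbers across a textual diff and discard coverage for lines whose surrounding code changed. This uses only Python's difflib and is an illustration of the issue, not the authors' methodology.

    import difflib

    def remap_coverage(old_src: str, new_src: str, covered_old_lines: set) -> set:
        """Map covered line numbers from the old version onto the new version.
        Inserted, deleted, or replaced regions lose their coverage data, which is
        exactly why stale coverage information becomes unreliable as code evolves."""
        matcher = difflib.SequenceMatcher(a=old_src.splitlines(), b=new_src.splitlines())
        covered_new_lines = set()
        for tag, i1, i2, j1, j2 in matcher.get_opcodes():
            if tag == "equal":
                offset = j1 - i1
                for old_ln in range(i1 + 1, i2 + 1):    # line numbers are 1-based
                    if old_ln in covered_old_lines:
                        covered_new_lines.add(old_ln + offset)
        return covered_new_lines
-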
General Commands Reference Guide C
General Commands Reference Guide C. TRACE32 Online Help > TRACE32 Documents > General Commands > General Commands Reference Guide C. Contents: History; CACHE.

CACHE - View and modify CPU cache contents
CACHE.CLEAN - Clean CACHE
CACHE.ComPare - Compare CACHE with memory
CACHE.DUMP - Dump CACHE
CACHE.FLUSH - Clean and invalidate CACHE
CACHE.GET - Get CACHE contents
CACHE.INFO - View all information related to an address
CACHE.INVALIDATE - Invalidate CACHE
CACHE.List - List CACHE contents
CACHE.ListFunc - List cached functions
CACHE.ListLine - List cached source code lines
CACHE.ListModule - List cached modules
CACHE.ListVar - List cached variables
CACHE.LOAD - Load previously stored cache contents
CACHE.RELOAD - Reload previously loaded cache contents
CACHE.SAVE - Save cache contents for postprocessing
CACHE.SNAPSHOT - Take cache snapshot for comparison
CACHE.UNLOAD - Unload previously loaded cache contents
CACHE.view
-
Test Script Language
CHAPTER 11: TEST SCRIPT LANGUAGE

We will now look at the TSL language. You have already been exposed to this language at various points of this book. All the recorded scripts that WinRunner creates when you perform an operation are in TSL code syntax. Keep in mind that while mastery of TSL is not required for creating automated tests, knowledge of the language helps to enhance the recorded tests and to create highly sophisticated tests. Skillful usage of TSL can limit the need for manual intervention when running your test. It can also make your test less error-prone.

Test Script Language
TSL is the script language used by WinRunner for recording and executing scripts. In this chapter, I will provide you with the foundational concepts of TSL. Think of these as the building blocks of TSL scripting that will help you learn how to write the language. Another useful resource to review while learning TSL is the WinRunner help system.

TIP: To access the WinRunner help system at any time, press the F1 key.

The TSL language is very compact, containing only a small number of operators and keywords. If you are proficient in any programming language, you will find TSL easy to learn. In fact, you will find it much easier than learning a programming language, because TSL is a script language and as such does not have many of the complex syntax structures you may find in programming languages. On the other hand, TSL as a script language has considerably fewer features and capabilities than a programming language.
-
An Introduction to Test Driven Development Using Perl
An Introduction to Test Driven Development Using Perl. Grant McLean, Catalyst IT Limited <[email protected]>. September 2008.

This article describes the practise of Test Driven Development and illustrates it with a worked example. The sample tests, source code and tools are written in Perl; however, the concepts should be applicable to any programming language.

Introducing Test Driven Development
Few people would dispute that testing is a critical part of any software development project. It is unfortunate, however, that testing is often left until the end. This may be due to a mistaken belief that tests are useless until you have something to test. Test Driven Development (TDD) positions testing in a critical role at the centre of development activities. Development models based around TDD favour a cyclic flow, with a series of iterations building one upon another to deliver functionality in stages. Each iteration follows a flow similar to that illustrated in figure 1.

Figure 1. An Idealised Test Driven Development cycle

The process begins with requirements gathering – deciding what will be delivered and defining success criteria (how we'll know when we've delivered it). The requirements definition feeds into the development of a test plan. Development of the program code can proceed once tests have been written to specify correct behaviour for the code. When tests have been written to describe all the requirements and code has been written to pass all the tests, the development is complete and the system can be deployed. The process then begins again with the definition of the requirements for the next set of deliverables.
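
The article's worked example is in Perl; purely to illustrate the test-first rhythm of the cycle described above, here is a minimal, hypothetical sketch in Python using the standard unittest module.

    import unittest

    # Step 1: write the tests first; they fail because fizzbuzz() does not exist yet.
    class TestFizzBuzz(unittest.TestCase):
        def test_multiples_of_three(self):
            self.assertEqual(fizzbuzz(9), "Fizz")

        def test_plain_numbers(self):
            self.assertEqual(fizzbuzz(7), "7")

    # Step 2: write just enough code to make the tests pass.
    def fizzbuzz(n: int) -> str:
        if n % 15 == 0:
            return "FizzBuzz"
        if n % 3 == 0:
            return "Fizz"
        if n % 5 == 0:
            return "Buzz"
        return str(n)

    if __name__ == "__main__":
        unittest.main()   # Step 3: run the suite; green means the iteration is done.
-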
Non-Intrusive Structural Coverage for Objective Caml
Electronic Notes in Theoretical Computer Science 264 (4) (2011) 59–73. www.elsevier.com/locate/entcs.

Non-Intrusive Structural Coverage for Objective Caml. Philippe Wang, Adrien Jonquet, Emmanuel Chailloux. Équipe APR, Laboratoire d'Informatique de Paris 6 (CNRS UMR 7606), Université Pierre et Marie Curie (Paris 6), 4 place Jussieu, 75005 Paris, France.

Abstract: This paper presents a non-intrusive method for Objective Caml code coverage analysis. While classic methods rewrite the source code into an instrumented version that will produce traces at runtime, our approach chooses not to rewrite the source code. Instead, we use a virtual machine to monitor instruction execution and produce traces. These low-level traces are used to create a machine-code coverage report. Combined with control-flow debug information, they can be analyzed to produce a source-code coverage report. The purpose of this approach is to make available a method to generate code coverage analysis with the same binary for testing and for production. Our customized virtual machine respects the same semantics as the original virtual machine; one of its original aspects is that it is implemented in Objective Caml, the very language we build the tool for. This work is part of the Coverage project, which aims to develop open source tools for safety-critical embedded applications and their code generators.

Keywords: Certification Tools Design, Code Coverage, Objective Caml Virtual Machine

1 Introduction
One of the most demanding development processes for safety-critical software was defined a couple of decades ago by the civil avionics authorities as the DO-178B standard [17].
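
The idea of observing execution inside the virtual machine, rather than rewriting the program, can be shown with a toy sketch. The bytecode format, interpreter, and offset-to-line table below are invented for illustration; they are not the OCaml virtual machine or the Coverage project's tooling.

    # Toy VM-level coverage: the interpreter records every bytecode offset it
    # executes, and a separate debug table maps offsets back to source lines.
    # The program itself is never rewritten or instrumented.
    PROGRAM = [
        ("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PRINT", None), ("HALT", None),
    ]
    DEBUG_LINES = {0: 1, 1: 1, 2: 2, 3: 3, 4: 4}   # hypothetical offset -> source line

    def run(program):
        stack, pc, executed = [], 0, set()
        while True:
            op, arg = program[pc]
            executed.add(pc)                 # coverage is observed, not injected
            if op == "PUSH":
                stack.append(arg)
            elif op == "ADD":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "PRINT":
                print(stack[-1])
            elif op == "HALT":
                break
            pc += 1
        return executed

    covered_offsets = run(PROGRAM)
    covered_lines = sorted({DEBUG_LINES[o] for o in covered_offsets})
    print("covered source lines:", covered_lines)

The measured program is unchanged, so the same binary can serve for testing and for production, which is the stated goal of the paper's approach.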