Automatic Removal of Unused Code for Software Testing

Total pages: 16

File type: PDF, size: 1020 KB

Automatic Removal of Unused Code for Software Testing

Jing Lin
BSc Computer Science
Submission date: 19th January 2015
Supervisor: Jens Krinke

This report is submitted as part requirement for the BSc Degree in Computer Science at UCL. It is substantially the result of my own work except where explicitly indicated in the text. The report may be freely copied and distributed provided the source is explicitly acknowledged.

Acknowledgement

This project was completed with the great help of many people, and I would like to give my most sincere thanks to all of them. I would like to express my sincere gratitude to my supervisor, Jens Krinke, for spending his time helping me understand this project and giving me suggestions for carrying it out. I would also like to take this chance to thank my family for their continuous support throughout my studies, my friends for their encouragement, and my home university for providing the Erasmus programme that made this possible.

Abstract

A traditional testing system reports on the whole system when it fails a test case. Such a report can be large, which makes it especially hard to locate and fix the errors in the system. This project aims to solve that problem by removing unused code automatically, so that the reduced source code is focused on a single test case and is easier to analyse and faster to test.

Contents

1 Introduction
  1.1 Motivation
  1.2 Aim and goal
  1.3 Achievement process
  1.4 Report overview
2 Background
  2.1 Delta debugging
    2.1.1 Simplifying and isolating failure-inducing input
    2.1.2 Locating causes of program failures
  2.2 Code coverage
    2.2.1 JaCoCo
    2.2.2 GCOV
  2.3 Delta tool
  2.4 Project approach vs. dead code elimination
3 Requirements and analysis
  3.1 Problem statement
  3.2 Requirements
  3.3 Use cases
  3.4 Code coverage analysis
    3.4.1 LCOV
    3.4.2 MELD
4 Design and implementation
  4.1 Delta Coverage tool design
  4.2 Structure of Delta Coverage
  4.3 Implementation of Delta Coverage
    4.3.1 Setup.sh
    4.3.2 Test.sh
    4.3.3 Run.sh
5 Testing and result evaluation
  5.1 Testing strategy
  5.2 Specific test cases
  5.3 Result evaluation based on different C programs
    5.3.1 Tower of Hanoi Algorithm
    5.3.2 Sudoku Algorithm
    5.3.3 Travelling Salesman Problem
6 Conclusions
  6.1 Summary
  6.2 Critical evaluation
  6.3 Future work
Appendices
Appendix A Evaluation data and results
  A.1 Tower of Hanoi
    A.1.1 Test Cases
  A.2 Sudoku
    A.2.1 Test Cases
  A.3 Travelling Salesman Problem
    A.3.1 Test Cases
Appendix B Project Plan
Appendix C Code Listing
  C.1 Delta Coverage tool implementation source code
    C.1.1 setup.sh
    C.1.2 test.sh
    C.1.3 run.sh
  C.2 The addition function for the Sudoku program

List of Figures

2.1 Minimizing Delta Debugging algorithm [1].
2.2 Delta Debugging algorithm [1] (where C3 means there are no test case changes).
2.3 Search in space [2].
2.4 Search in time: cause transitions [2].
2.5 Dead code [9].
2.6 Dead code elimination [9].
4.1 Delta Coverage architecture design.
4.2 Delta Coverage workflow.
4.3 Implementation Structure: setup.sh
4.4 The output messages of GCOV.
4.5 A program with coverage information: toh.c.gcov
4.6 Implementation Structure: test.sh
4.7 Implementation Structure: run.sh
5.1 Tower of Hanoi: test1.txt.
5.2 Tower of Hanoi: test2.txt.
5.3 Coverage estimation: toh test1.info
5.4 Coverage estimation: toh test2.info
5.5 Differences between removed code and original code from the code coverage report toh test1.info.
5.6 Differences between removed code and original code from the code coverage report toh test2.info.
5.7 Differences between removed code and original code from the code coverage report toh test2.info.
5.8 Differences between the reduced source codes: minimal test1.txt toh.c (left) and minimal test2.txt toh.c (right).
5.9 An example of a Wikipedia Sudoku: test1.txt.
5.10 The world's hardest Sudoku: test2.txt.
5.11 Different coverage estimations with test1.txt (first row) and test2.txt (second row).
5.12 Differences between the removed code and the original source code.
5.13 Differences between the reduced source codes.
5.14 Different coverage estimations with test1.txt (first row), test2.txt (second row) and test3.txt (third row).
5.15 Differences between the removed code and the original source code.
5.16 Differences between the reduced source codes for the three test cases: test1.txt (left), test2.txt (centre), test3.txt (right).

Chapter 1 Introduction

1.1 Motivation

During the testing of a system, it is possible to check which code has been executed by measuring code coverage information. There are tools that help to find problems inside the system when a test fails. However, most of these tools are based on static analysis and report on the complete source code. The approach of this project improves the process of code execution analysis by returning a reduced source code instead of the complete one.

The reduced source code can be used for much more effective fault localisation during a test. Our proposal not only reduces the program size but also makes the testing process much easier and faster than working with the whole source code. The reduced system provides the minimal source code that still produces the same results as the complete system.

1.2 Aim and goal

The aim of this project is to develop a tool called Delta Coverage which can automatically remove unused code for software testing.
Unused code here means code that has never been executed during the testing. An approach called Delta Debugging, developed by Andreas Zeller [1][2], can be used for this purpose: it removes parts of the source code and checks whether the reduced source code still produces the same results as the original one. For a better measurement, I focus on combining Delta Debugging with code coverage, and the goal of this project is the development of Delta Coverage for C programs.

1.3 Achievement process

To achieve the aims and goals of this project, my contributions are the following:

1. Studying the Delta Debugging approach [1][2].
2. Studying code coverage tools:
   • JaCoCo (Java Code Coverage) is a code coverage tool for Java programs.
   • GCOV is a code coverage tool for C programs and only works with the GNU Compiler Collection (GCC).
3. Studying Delta [3], a tool that applies the Delta Debugging algorithm on GNU systems.
4. Implementing the Delta Coverage tool with the following features:
   • Use of GCOV to measure the coverage of a program.
   • Use of the Delta tool to remove as much unused code as possible.
5. Evaluating and analysing different C programs with the Delta Coverage tool.

1.4 Report overview

This report is organized in the following chapters:

Chapter 1 gives an overview of the project, its aims, goals and achievements.

Chapter 2 details the background on Delta Debugging and its related work. It also briefly explains different test coverage tools such as JaCoCo and GCOV, and introduces an overview of the Delta tool. Finally, it compares the proposed approach with dead code elimination (http://en.wikipedia.org/wiki/Dead_code).

Chapter 3 thoroughly describes the problem statement, requirements, use cases and analysis for this approach.

Chapter 4 introduces the design of the Delta Coverage tool and explains the structure and process of its implementation.
Chapter 5 describes the evaluation and analysis of the results of different test cases using the Delta Coverage tool.

Chapter 6 details the conclusions of this project and possible future developments.

Chapter 2 Background

2.1 Delta debugging

Delta Debugging is an algorithm that finds bugs by systematically testing hypotheses; it is one of the methods that automate the scientific method of debugging, and it can simplify failing inputs and isolate the causes of program failures. In 1997, while Andreas Zeller was finishing his PhD thesis "Configuration Management with Version Sets" (https://www.st.cs.uni-saarland.de/publications/files/zeller-thesis-1997.pdf), he and his student started to work on a visual debugger to visualise data structures at execution time. They then developed the Delta Debugging algorithm, which makes debugging programs easier.

2.1.1 Simplifying and isolating failure-inducing input

The research of Holger Cleve and Andreas Zeller [1] explains the uses of the Delta Debugging algorithm: it automatically simplifies a failing test case and isolates the difference between a passing and a failing test case. After running the program that has to be debugged, the algorithm classifies the outcome of each test in three different cases:

1. The test passes (✓).
2. The test fails (✗).
3. The outcome is unresolved (?).

Delta Debugging would have been useful in one real case from the past decade (Bugzilla, the Mozilla bug database).
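To make the trial-and-error reduction concrete, here is a minimal shell sketch. It is not Zeller's full ddmin algorithm (which removes larger chunks before falling back to single deltas); it simply tries to delete one line at a time and keeps each deletion whenever the test still succeeds. The "interesting" predicate is a hypothetical stand-in: a real Delta Debugging run would re-run the failing test case on the reduced program instead.

```shell
# Hypothetical "interesting" test: the reduced file must still contain
# the line KEEP (standing in for "the program still fails the test").
interesting() { grep -q KEEP "$1"; }

printf 'a\nKEEP\nb\nc\n' > input.txt
cp input.txt minimal.txt

changed=1
while [ "$changed" -eq 1 ]; do
  changed=0
  lines=$(grep -c '' minimal.txt)             # number of lines left
  for i in $(seq 1 "$lines"); do
    sed "${i}d" minimal.txt > candidate.txt   # try deleting line i
    if interesting candidate.txt; then
      mv candidate.txt minimal.txt            # deletion kept: still interesting
      changed=1
      break                                   # restart on the smaller file
    fi
  done
done

cat minimal.txt                               # prints only: KEEP
```

The loop terminates with a 1-minimal result in the same sense as Delta Debugging: removing any single remaining line makes the test stop succeeding.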
Recommended publications
  • Using the GNU Compiler Collection (GCC)
  • Read the Thesis
  • Assignment 10 — 18.11.2020 – V1.2 Dynamic Program Analysis
  • In Using the GNU Compiler Collection (GCC)
  • KLEE: Unassisted and Automatic Generation of High-Coverage Tests for Complex Systems Programs
  • The Definitive Guide to GCC
  • Code Coverage: Free Software and Virtualization to the Rescue
  • Improving Test Coverage of GNU Coreutils
  • Improving the Linux Test Project with Kernel Code Coverage Analysis
  • Quality of HAL Libraries T.O.M.A.S Team Goal of This Part 3
  • Tools: Gcov, Cscope, Ctags, and Makefiles
  • GCC Optimizations Introduction