Developing DO-178B/C Compliant Software for Airborne Systems
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
- A System Model for Managing Requirement Traceability Matrices via Statistical Artifact Change Analysis
  Benjamin J. Deaver and LiGuo Huang, Southern Methodist University, Dallas.
  Introduction and Motivation: The system Requirement Traceability Matrix (RTM) is primarily used for ensuring that all requirements are fulfilled by the system artifact deliverables, and for managing the impact that changes to deliverables have on other systems. In the systems engineering and system-of-systems (SoS) engineering landscapes, the RTM provides significant insight into the internal workings of the relationships between requirements and deliverable artifacts.
  The Value of the Requirements Traceability Matrix: Kannenberg et al. identify the underlying necessity of the RTM and its effect on project management, process visibility, verification and validation, as well as project maintainability. The RTM is useful at its time of creation, but over time the dynamic nature of systems and SoS engineering landscapes requires constant maintenance in a frequently changing landscape to maintain its original level of validity.
  Gantt Open Source Software Project: Our initial dataset for evaluation is taken from the Gantt open source software project (http://www.ganttproject.biz). The initial trace data was provided by Dr. Alexander Egyed at the Institute for Systems Engineering and Automation at Johannes Kepler University; traces of requirements to code for subsequent Gantt versions are being created using methods similar to the original collection.
  Procedure: 1. Identify the taxonomy of change for a given domain (Systems Engineering, SoS Engineering, Software Engineering). 2. Identify and classify changes between static versions of the product. 3. Generate a Requirements Trace Matrix for each static version of the product. (A brief illustrative sketch follows this entry.)
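The three-step procedure above can be illustrated with a small sketch. The following Python fragment is hypothetical and not code from the poster: the change taxonomy, artifact names, and requirement IDs are all invented for illustration.

```python
# Hypothetical sketch: classify changes between two static versions of a
# product and rebuild a requirements trace matrix (RTM) for each version.
# The taxonomy {"added", "removed", "modified"} is an illustrative assumption.

def classify_changes(old_artifacts, new_artifacts):
    """Compare artifact contents between versions and label each change."""
    changes = {}
    for name in old_artifacts.keys() | new_artifacts.keys():
        if name not in old_artifacts:
            changes[name] = "added"
        elif name not in new_artifacts:
            changes[name] = "removed"
        elif old_artifacts[name] != new_artifacts[name]:
            changes[name] = "modified"
    return changes

def build_rtm(traces):
    """Build an RTM as a mapping: requirement id -> set of artifact names."""
    rtm = {}
    for req, artifact in traces:
        rtm.setdefault(req, set()).add(artifact)
    return rtm

# Invented example data for two static versions of a product.
v1_artifacts = {"scheduler.c": "v1 body", "ui.c": "v1 body"}
v2_artifacts = {"scheduler.c": "v2 body", "report.c": "new"}

v1_rtm = build_rtm([("REQ-1", "scheduler.c"), ("REQ-2", "ui.c")])
v2_rtm = build_rtm([("REQ-1", "scheduler.c"), ("REQ-3", "report.c")])

print(classify_changes(v1_artifacts, v2_artifacts))
# e.g. {'scheduler.c': 'modified', 'ui.c': 'removed', 'report.c': 'added'}
print(v1_rtm, v2_rtm)
```

The point of the sketch is simply that once artifact changes are labelled, the per-version trace matrices show which requirements are touched by each change.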
Parasoft Dottest REDUCE the RISK of .NET DEVELOPMENT
Parasoft dotTEST REDUCE THE RISK OF .NET DEVELOPMENT TRY IT https://software.parasoft.com/dottest Complement your existing Visual Studio tools with deep static INCREASE analysis and advanced PROGRAMMING EFFICIENCY: coverage. An automated, non-invasive solution that the related code, and distributed to his or her scans the application codebase to iden- IDE with direct links to the problematic code • Identify runtime bugs without tify issues before they become produc- and a description of how to fix it. executing your software tion problems, Parasoft dotTEST inte- grates into the Parasoft portfolio, helping When you send the results of dotTEST’s stat- • Automate unit and component you achieve compliance in safety-critical ic analysis, coverage, and test traceability testing for instant verification and industries. into Parasoft’s reporting and analytics plat- regression testing form (DTP), they integrate with results from Parasoft dotTEST automates a broad Parasoft Jtest and Parasoft C/C++test, allow- • Automate code analysis for range of software quality practices, in- ing you to test your entire codebase and mit- compliance cluding static code analysis, unit testing, igate risks. code review, and coverage analysis, en- abling organizations to reduce risks and boost efficiency. Tests can be run directly from Visual Stu- dio or as part of an automated process. To promote rapid remediation, each problem detected is prioritized based on configur- able severity assignments, automatical- ly assigned to the developer who wrote It snaps right into Visual Studio as though it were part of the product and it greatly reduces errors by enforcing all your favorite rules. We have stuck to the MS Guidelines and we had to do almost no work at all to have dotTEST automate our code analysis and generate the grunt work part of the unit tests so that we could focus our attention on real test-driven development. -
- HALO: Post-Link Heap-Layout Optimisation
  Joe Savage and Timothy M. Jones, University of Cambridge, UK.
  Abstract: Today, general-purpose memory allocators dominate the landscape of dynamic memory management. While these solutions can provide reasonably good behaviour across a wide range of workloads, it is an unfortunate reality that their behaviour for any particular workload can be highly suboptimal. By catering primarily to average and worst-case usage patterns, these allocators deny programs the advantages of domain-specific optimisations, and thus may inadvertently place data in a manner that hinders performance, generating unnecessary cache misses and load stalls. To help alleviate these issues, we propose HALO: a post-link profile-guided optimisation tool that can improve the …
  1 Introduction: As the gap between memory and processor speeds continues to widen, efficient cache utilisation is more important than ever. While compilers have long employed techniques like basic-block reordering, loop fission and tiling, and intelligent register allocation to improve the cache behaviour of programs, the layout of dynamically allocated memory remains largely beyond the reach of static tools. Today, when a C++ program calls new, or a C program malloc, its request is satisfied by a general-purpose allocator with no intimate knowledge of what the program does or how its data objects are used. Allocations are made through fixed, lifeless interfaces, and fulfilled by …
- Architectural Support for Scripting Languages
  By Dibakar Gope. A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy (Electrical and Computer Engineering) at the University of Wisconsin–Madison, 2017. Date of final oral examination: 6/7/2017.
  The dissertation is approved by the following members of the Final Oral Committee: Mikko H. Lipasti, Professor, Electrical and Computer Engineering; Gurindar S. Sohi, Professor, Computer Sciences; Parameswaran Ramanathan, Professor, Electrical and Computer Engineering; Jing Li, Assistant Professor, Electrical and Computer Engineering; Aws Albarghouthi, Assistant Professor, Computer Sciences.
  © Copyright by Dibakar Gope 2017. All Rights Reserved.
  This thesis is dedicated to my parents, Monoranjan Gope and Sati Gope.
  Acknowledgments: First and foremost, I would like to thank my parents, Sri Monoranjan Gope and Smt. Sati Gope, for their unwavering support and encouragement throughout my doctoral studies, which I believe to be the single most important contribution towards achieving my goal of receiving a Ph.D. Second, I would like to express my deepest gratitude to my advisor, Prof. Mikko Lipasti, for his mentorship and continuous support throughout the course of my graduate studies. I am extremely grateful to him for guiding me with such dedication and consideration and never failing to pay attention to any details of my work. His insights, encouragement, and overall optimism have been instrumental in organizing my otherwise vague ideas into some meaningful contributions in this thesis. This thesis would never have been accomplished without his technical and editorial advice. I find myself fortunate to have met and had the opportunity to work with such an all-around nice person in addition to being a great professor.
- Requirements Traceability Practices Guide
  CDC Unified Process Practice Guide: Requirements Traceability.
  Document Purpose: The purpose of this document is to provide guidance on the project management practice of requirements traceability and to describe the practice overview, requirements, best practices, activities, and key terms related to these requirements. In addition, templates relevant to this practice are provided at the end of the guide.
  Practice Overview: Requirements traceability is one part of an overarching requirements management practice and extends from requirements definition through to implementation. Requirements tracing is used to ensure that each step of the product's development process is correct, conforms to the needs of prior and next steps, and conforms to the defined requirements. It is a technique that helps to ensure that the project delivers what stakeholders expect; applied correctly, it can greatly increase the quality and reliability of a project's final product while minimizing costly rework resulting from requirements errors.
  Requirements tracing is the ability to describe and follow the life of a requirement, in both a forward and a backward direction, ideally through each step of the entire product's life cycle. This is done by documenting and tracking traceability relationships between requirements. A traceability relationship is a dependency relationship between project and/or product elements. Similar to the way a dynamic project schedule may react to a change in one task, a change to a requirement element may have a rippling effect on other elements. A well-documented traceability relationship clearly defines requirement dependencies and allows for analysis of how changes in requirements affect other requirements and the project as a whole. (A minimal illustration of forward and backward tracing follows this entry.)
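As a rough illustration of the forward and backward tracing described above, here is a minimal Python sketch; the requirement, design, and test identifiers are invented and not taken from the guide.

```python
# Minimal sketch of traceability relationships: each link records that a
# source element (e.g. a requirement) is realized or verified by a target
# element (a design item or test case). All identifiers are invented.

links = [
    ("REQ-001", "DES-010"),   # requirement -> design element
    ("DES-010", "TEST-100"),  # design element -> test case
    ("REQ-002", "DES-011"),
]

def trace_forward(element, links):
    """Follow links from an element toward implementation and verification."""
    targets = [dst for src, dst in links if src == element]
    return targets + [t for dst in targets for t in trace_forward(dst, links)]

def trace_backward(element, links):
    """Follow links from an element back toward its originating requirements."""
    sources = [src for src, dst in links if dst == element]
    return sources + [s for src in sources for s in trace_backward(src, links)]

print(trace_forward("REQ-001", links))   # ['DES-010', 'TEST-100']
print(trace_backward("TEST-100", links)) # ['DES-010', 'REQ-001']
```

The same link structure supports the "rippling effect" analysis mentioned in the guide: a change to REQ-001 is traced forward to every dependent design and test element.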
- Non-Functional Requirements
  Peter Eeles, IBM Software Group, Rational software.
  Agenda: Definitions; Types of requirement; Classifying requirements; Capturing NFRs; Summary.
  Definitions: Functional requirements describe the behaviors (functions or services) of the system that support user goals, tasks or activities [Malan]. Non-functional requirements include constraints and qualities [Malan]. [System] qualities are properties or characteristics of the system that its stakeholders care about and hence will affect their degree of satisfaction with the system [Malan]. A constraint is a restriction on the degree of freedom we have in providing a solution [Leffingwell].
  References: [Leffingwell] Managing Software Requirements: A Unified Approach, Dean Leffingwell and Don Widrig. [Malan] Defining Non-Functional Requirements, Ruth Malan and Dana Bredemeyer.
  Types of Requirement: Use cases define the behavior of the system from an external perspective. System-wide requirements include legal and regulatory requirements, application standards, qualities that the system exhibits (such as usability, reliability, scalability, performance), operating system and environment requirements, compatibility requirements, and other design and implementation constraints. …
- Runtime Detection of C-Style Errors in UPC Code
  Peter Pirkelbauer, Chunhua Liao, and Dan Quinlan, Lawrence Livermore National Laboratory; Thomas Panas, Microsoft, Parallel Data Warehousing.
  Abstract: Unified Parallel C (UPC) extends the C programming language (ISO C99) with explicit parallel programming support for the partitioned global address space (PGAS), which provides a global memory space with localized partitions to each thread. Like its ancestor C, UPC is a low-level language that emphasizes code efficiency. The absence of dynamic (and static) safety checks allows programmer oversights and software flaws that can be hard to spot. In this paper, we present an extension of a dynamic analysis tool, ROSE-Code Instrumentation and Runtime Monitor (ROSE-CIRM), for UPC to help programmers find C-style errors involving the global address space.
  Software frequently contains flaws, and the size and complexity of modern software makes the detection of such flaws difficult. Reducing the number of software flaws is an effort that permeates all stages in software development. For example, adopting rigid coding standards and restricting programming languages to a safe subset can eliminate error-prone coding techniques. Static and dynamic code analysis tools that operate on source or binary code are also used to reduce software flaws. Static analysis tools utilize source code (style) analysis or formal analysis techniques such as dataflow analysis, abstract interpretation, and model checking. Static analysis tools do not influence the running programs but …
- Wind River® VxWorks® 7 Third Party License Notices
  This document contains third party intellectual property (IP) notices for the Wind River® VxWorks® 7 distribution. Certain licenses and license notices may appear in other parts of the product distribution in accordance with the license requirements.
  Trademarks: All company, product and service names used in this software are for identification purposes only. Wind River and VxWorks are registered trademarks of Wind River Systems. UNIX is a registered trademark of The Open Group. IBM and Bluemix are registered trademarks of the IBM Corporation. All other third-party trademarks are the property of their respective owners.
  Third Party Notices:
  64-Bit Dynamic Linker. Version: (not stated). Component(s): Runtime. Description: The dynamic linker is used to load shared libraries. … BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  ACPICA. Version: 20170303. Component(s): Runtime. Description: Provides code to implement the ACPI specification in VxWorks. Notices: 1. Copyright Notice: Some or all of this work - Copyright (c) 1999-2016, Intel Corp. All rights reserved. 2. License: 2.1. This is your license from Intel Corp. under its intellectual property rights. You may have additional license terms from the party that provided you this software, covering your right to use that party's intellectual property rights. 2.2. Intel grants, free of charge, to any person ("Licensee") obtaining a copy of the source code appearing in this file ("Covered Code") an irrevocable, …
- Accelerate Software Innovation Through Continuous Quality
  Software quality is recognized as the #1 issue IT executives are trying to mitigate. Enterprise organizations strive to accelerate the delivery of a compelling user experience to their customers in order to drive revenue. QA teams know they have issues and are actively looking for solutions to save time, increase quality, improve security, and more. The most notable difficulties are in identifying the right areas to test, the availability of flexible and reliable test environments and test data, and the realization of benefits from automation.
  You may be facing many challenges with delivering software that meets the high expectations for quality, cost, and schedule driven by the business. An effective software testing strategy can address these issues. If you're looking to improve your software quality while achieving your business goals, Parasoft can help. With over 30 years of making testing easier for our customers, we have the innovation you need and the experience you trust. Our extensive continuous quality suite spans every testing need and enables you to reach new heights.
  Quality-First Approach: You can't test quality into an application at the end of the software development life cycle (SDLC). You need to ensure that your software development process and practices put a priority on quality-driven development and integrate a comprehensive testing strategy to verify that the application's functionality meets the requirements. Shift testing left to the start of your development process to bring quality to the forefront.
- Requirements Traceability Matrix Template
  Connecticut Department of Social Services, Enterprise Program Management Office.
  Purpose of the Requirements Traceability Matrix: The Requirements Traceability Matrix (RTM) is a document that links requirements throughout the validation process. Its purpose is to ensure that all requirements defined for a system are tested in the test protocols. The traceability matrix is a tool both for the validation team, to ensure that requirements are not lost during the validation project, and for auditors, to review the validation documentation.
  Information to Include: The requirements traceability matrix is usually developed in concurrence with the initial list of requirements (either the User Requirements Specification or the Functional Requirements Specification). As the Design Specifications and Test Protocols are developed, the traceability matrix is updated to include the updated documents. Ideally, requirements should be traced to the specific test step in the testing protocol in which they are tested.
  How the RTM Is Used: A requirements traceability matrix is used to check whether the current project requirements are being met, and to help in the creation of a request for proposal, software requirements specification, various deliverable documents, and project plan tasks.
  Change Impact Analysis – if a requirement changes, trace links identify the related and dependent artifacts. These artifacts can easily be verified and, if required, adjusted; the probability of overlooking related artifacts is reduced.
  Coverage Analysis – traceability ensures that no requirements are overlooked. Especially when certifying safety-critical products, it is necessary to demonstrate that all requirements are realized.
  Project Status Analysis – tracking of the project status is possible: analyzing the traceability data allows the project manager to see the completion status of the requirements. (A brief sketch of these analyses follows this entry.)
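To make the change impact, coverage, and status analyses concrete, the following is a small hypothetical Python sketch; the RTM layout, requirement IDs, and artifact names are assumptions made for illustration only.

```python
# Hypothetical RTM: each requirement maps to the artifacts that realize it
# and the test steps that verify it. IDs and names are invented.
rtm = {
    "REQ-001": {"artifacts": ["design.doc#3.1", "module_a.py"], "tests": ["TP-01 step 4"]},
    "REQ-002": {"artifacts": ["module_b.py"], "tests": []},
    "REQ-003": {"artifacts": [], "tests": ["TP-02 step 1"]},
}

# Change impact analysis: which requirements are touched if an artifact changes?
def impacted_by(artifact):
    return [req for req, links in rtm.items() if artifact in links["artifacts"]]

# Coverage analysis: requirements with no test step are not yet verified.
def untested():
    return [req for req, links in rtm.items() if not links["tests"]]

# Project status analysis: fraction of requirements that have at least one test.
def completion_status():
    covered = len(rtm) - len(untested())
    return covered / len(rtm)

print(impacted_by("module_a.py"))  # ['REQ-001']
print(untested())                  # ['REQ-002']
print(completion_status())         # ~0.67 (2 of 3 requirements have tests)
```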
- Matrox Imaging Library (MIL) 10.0: MIL 10 Processing Pack 3 Release Notes
  (c) Copyright Matrox Electronic Systems Ltd., 1992-2018.
  Main table of contents: Section 1: Differences between MIL 10 Processing Pack 3 and MIL 10 Processing Pack 2 with Update 63. Section 2: Differences between MIL 10 Processing Pack 2 with Update 63 and MIL 10 Processing Pack 2. Section 3: Differences between MIL 10 Processing Pack 2 and MIL 10 Processing Pack 1. Section 4: Differences between MIL 10 Processing Pack 1 and MIL 10. Section 5: Differences between MIL 10 and MIL 9 Processing Pack 2 with Update 56. Section 6: Differences between MIL 9 Processing Pack 2 with Update 56 and MIL 9 Processing Pack 2. Section 7: Differences between MIL 9 Processing Pack 2 with Update 45 and MIL 9 Processing Pack 2. Section 8: Differences between MIL 9 Processing Pack 2 and MIL 9 Processing Pack 1. Section 9: Differences between MIL 9 Processing Pack 1 and MIL 9. Section 10: Differences between MIL 9 and MIL 8 Processing Pack 4.
  Section 1: Differences between MIL 10 Processing Pack 3 and MIL 10 Processing Pack 2 with Update 63. Table of contents for Section 1: 1. Overview. 2. New functionalities and improvements: 2.01 MIL processing specific examples; 2.02 Classification module; 2.03 Blob module; 2.04 Calibration module; 2.05 SureDotOCR® module; 2.06 Registration module; 2.07 Pattern Matching module; 2.08 Model Finder module; 2.09 Metrology module; 2.10 3dMap module; 2.11 Code Reader module; 2.12 Measurement module; 2.13 String Reader module; 2.14 Color module; 2.15 Primitives; 2.16 Graphics; 2.17 General. 3. …
- Detecting Buffer Overflows Using Testcase Synthesis and Code Instrumentation
  By Michael A. Zhivich, S.B., Electrical Engineering and Computer Science (2004), Massachusetts Institute of Technology. Submitted to the Department of Electrical Engineering and Computer Science in partial fulfillment of the requirements for the degree of Master of Engineering in Electrical Engineering and Computer Science at the Massachusetts Institute of Technology, June 2005. © Massachusetts Institute of Technology 2005. All rights reserved.
  Thesis Supervisor: Richard Lippmann, Senior Scientist, MIT Lincoln Laboratory. Accepted by: Arthur C. Smith, Chairman, Department Committee on Graduate Students.
  Abstract: The research presented in this thesis aims to improve existing approaches to dynamic buffer overflow detection by developing a system that utilizes code instrumentation and adaptive test case synthesis to find buffer overflows and corresponding failure-inducing …