Assurance Via Model Transformations and Their Hierarchical Refinement


Zinovy Diskin, Tom Maibaum, Alan Wassyng, Stephen Wynn-Williams, Mark Lawford
McMaster University, McMaster Centre for Software Certification, Hamilton, Canada
{diskinz,maibaum,wassyng,wynnwisj,lawford}@mcmaster.ca

ABSTRACT

Assurance is a demonstration that a complex system (such as a car or a communication network) possesses an important property, such as safety or security, with a high level of confidence. In contrast to currently dominant approaches to building assurance cases, which are focused on goal structuring and/or logical inference, we propose considering assurance as a model transformation (MT) enterprise: saying that a system possesses an assured property amounts to saying that a particular assurance view of the system, comprising the assurance data, satisfies acceptance criteria posed as assurance constraints. While the MT realizing this view is very complex, we show that it can be decomposed into elementary MTs via a hierarchy of refinement steps. The transformations at the bottom level are ordinary MTs that can be executed for data specifying the system, thus providing the assurance data to be checked against the assurance constraints. In this way, assurance amounts to traversing the hierarchy from the top to the bottom and assuring the correctness of each MT in the path. Our approach has a precise mathematical foundation (rooted in process algebra and category theory) — a necessity if we are to model precisely and then analyze our assurance cases. We discuss the practical applicability of the approach, and argue that it has several advantages over existing approaches.

KEYWORDS

Assurance case, Model transformation, Block diagram, Decomposition, Substitution

ACM Reference Format:
Zinovy Diskin, Tom Maibaum, Alan Wassyng, Stephen Wynn-Williams, Mark Lawford. 2018. Assurance via model transformations and their hierarchical refinement. In Proceedings of ACM Models conference (MODELS'18). ACM, New York, NY, USA, Article 4, 11 pages. https://doi.org/10.475/123_4

1 INTRODUCTION

We understand assurance to be a demonstration that a complex system such as a car or a communication network possesses an important complex property such as safety or security with a sufficiently high level of confidence. We then write S ⊨ P_assr, where S stands for the system and P_assr for the property; we say S satisfies P_assr or P_assr holds for S. A technically more accurate formulation would say that S satisfies P_assr with acceptably high confidence if the system is used as intended. Exploiting the idiom of a system of systems for complex systems, we can say that assurance is about property-of-properties of system-of-systems.

A standard (and perhaps the only) practical engineering way to manage complexity is decomposition of the problem into subproblems; these subproblems are then themselves decomposed, and so on, until a set of "atomic" problems whose solution is known is reached. The solutions to the subproblems are then combined in a pre-determined way to solve the original problem. Different realizations of this idea for different contexts and in different terms are abundant in engineering, science, mathematics, and everyday life. Not surprisingly, the decomposition idea is heavily employed for building assurance cases (ACs) — documents aimed at demonstrating S ⊨ P_assr, which are written by the manufacturer of S and assessed by certifying bodies¹. Building ACs based on decomposition is now supported by several notations and tools, primarily Goal-Structuring Notation (GSN) and the Claims-Arguments-Evidence notation (CAE). These methods and tools [1, 29] are becoming a de facto standard in safety assurance.

¹ The latter can be an independent agency or a group of experts at the manufacturer disjoint from the AC writers.

The combination of two ideas — delegating the assurance argument to the manufacturer and the decomposition approach outlined above — gave rise to the growing popularity of ACs, which have lately emerged as a widely-used technique for assurance justification and assessment (see, e.g., surveys [3, 25] on safety cases).

While we do believe in the power of both ideas, we think that the way of leveraging decomposition for assurance in GSN and CAE diagrams is confusing for two reasons. First, users of these notations typically intermix two decomposition hierarchies: functional/goal decomposition and logical decomposition, i.e., inference [5]. Second, data and dataflow, which we will show are crucially important for assurance, are left implicit in GSN/CAE-diagrams. While keeping dataflow implicit may (arguably) be acceptable for documenting design activities, it is definitely not acceptable in assurance, and essentially diminishes the value of GSN/CAE-based assurance cases.

We propose another decomposition mechanism based on model transformations. The idea is based on three observations 1)-3) described below.

1) We notice that saying S ⊨ P_assr means that data about the system relevant for assurance, D_assr, satisfy some relevant constraints, C_assr; that is, we define S ⊨ P_assr to be the statement D^S_assr ⊨ C_assr, where ⊨ can be read as either satisfies or conforms to — a phrasing often used in contexts where properties are seen as constraints. To simplify notation, below we will omit the superscript S if it is clear from the context. Importantly, data D_assr are to be computed from "raw" data about the system, D_0 (think of physical parameters of a car, or technical parameters of a network), rather than being given immediately: D_assr = f_assr(D_0), where f_assr refers to an assurance function that inputs the raw system data and returns assurance data. Thus, the top assurance claim S ⊨ P_assr is rewritten as a data conformance statement f_assr(D_0) ⊨ C_assr.

2) Next we notice a principal distinction between a definition and an execution of a function, which is important for assurance. Function f_assr can be seen as an execution of a model transformation (MT) definition F_assr for data D_0, i.e., f_assr(D_0) = F_assr^exe(D_0). Also, MTs are defined over metamodels and can even be specified as mappings F_assr: M_0 → M_assr (see [9]), where M_0 and M_assr are metamodels for, respectively, system data and assurance data. Thus assurance can be seen as a special view of the system, where F_assr is the view definition and f_assr(D_0) is the view execution. Below we will often call transformation F_assr an assurance view. Moreover, as metamodels typically include constraints, we can include assurance constraints C_assr in M_assr and thus reformulate the assurance problem S ⊨ P_assr as a typical MT problem: does the result of a transformation satisfy some predefined constraints encoded in the target metamodel?

The workflow described above is specified in Fig. 1 as a block diagram, whose nodes, shaped as directed rectangles, refer to processes/functions, and rounded rectangles refer to (meta)data; as usual, data consist of a structure of data items that conforms to the metamodel (think of a data graph typed over the type graph so that the constraints are satisfied).

[Figure 1: MTBA Architecture of Assurance. A block diagram relating the System Data (conforming to the System Metamodel M_0), the assurance view definition F_assr: M_0 → M_assr (hierarchically decomposed), its execution Exe(F_assr) producing the Assurance Data (conforming to the Assurance Metamodel M_assr with constraints C_assr), and the checking procedure Exe(⊨) yielding the Conclusion.]

... procedure whose assurance is not problematic. Thus, assurance can be viewed as establishing the correctness of a complex model transformation via its hierarchical decomposition — hence the title of the paper. We will refer to the framework outlined above as Model-Transformation Based Assurance (MTBA).

Our plan for the paper is as follows. In the next section, we present an overall view of MTBA and, based on it, explain the content of the technical part of the paper (Sect. 3, 4 and 5). Sect. 6 is a discussion of the possible practical applicability of MTBA. Sect. 7 is Related and Future Work, and Sect. 8 concludes.

2 MTBA IN A NUTSHELL

Many assurance techniques rely on requirement decomposition: the high-level assurance requirements for the system are decomposed into the corresponding requirements for subsystems, and further on until we reach the level of elementary components. (E.g., in safety assurance, these high-level requirements are called safety goals; in privacy protection, standardized NIST controls; and in security, the set of system-level requirements is specified in the Protection Profile — a document identifying security requirements for a class of security devices.) The decomposition is mainly based on design patterns and supported by mathematical models, so that the assurance argument can be close to formal if the mathematical complexity is manageable, or is supplemented by testing and/or model checking and similar techniques otherwise. We will refer to this part as inferential assurance (IA).

Yet, however successfully we manage to decompose each system-level goal, the system goals must themselves be validated to ensure that they are suitable (e.g., complete) — indeed, any formal procedure begins with assumptions taken for granted (cf. axioms of the ancient Greeks). The only way to "prove" such assumptions is to rely on observational or experimental techniques. Indeed, an assurance case must have all assumptions validated through experimental evidence - no loose ends! Often, this experimental justification is validated by previous experience and expert opinion, but the use of
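The recasting of the top claim S ⊨ P_assr as a data conformance statement f_assr(D_0) ⊨ C_assr can be sketched in code. The following Python fragment is purely illustrative: the data fields, the assurance function, and the constraints are all hypothetical stand-ins (the paper prescribes no particular encoding), but the shape mirrors observations 1) and 2): raw system data D_0, a view execution f_assr, and a check of the result against the assurance constraints encoded with the target metamodel.

```python
# Hypothetical sketch: S |= P_assr recast as f_assr(D0) |= C_assr.

# "Raw" system data D0 (think of physical parameters of a car).
D0 = {"mass_kg": 1450.0, "braking_force_n": 11600.0, "sensor_redundancy": 2}

# Execution of the assurance view: computes the assurance data D_assr
# from the raw data (the role of f_assr = F_assr^exe applied to D0).
def f_assr(d0):
    return {
        "stopping_decel_ms2": d0["braking_force_n"] / d0["mass_kg"],
        "redundant_sensing": d0["sensor_redundancy"] >= 2,
    }

# Assurance constraints C_assr, encoded as predicates over the
# assurance data (standing in for constraints in the metamodel M_assr).
C_assr = [
    lambda d: d["stopping_decel_ms2"] >= 6.0,  # adequate deceleration
    lambda d: d["redundant_sensing"],          # sensor redundancy
]

D_assr = f_assr(D0)                        # view execution
assured = all(c(D_assr) for c in C_assr)   # does D_assr |= C_assr hold?
print(assured)
```

Here the constraint thresholds are invented for the example; the point is only that the assurance claim reduces to executing a transformation and checking its output.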
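The abstract's claim that the complex assurance view decomposes into elementary MTs, with assurance traversing the hierarchy and checking each step, can likewise be sketched. The sketch below assumes, purely for illustration, that refinement bottoms out in a linear pipeline of functions, each paired with a local acceptance check; all names and checks are hypothetical.

```python
# Illustrative sketch of hierarchical refinement: the complex view F_assr
# is refined into elementary MTs, each assured by a local check, so that
# correctness of the composite is established step by step.

def extract_parameters(d0):   # elementary MT 1: raw data -> parameters
    return {"decel": d0["braking_force_n"] / d0["mass_kg"]}

def classify(params):         # elementary MT 2: parameters -> verdict
    return {"braking_ok": params["decel"] >= 6.0}

# Each elementary MT is paired with a local acceptance criterion.
pipeline = [
    (extract_parameters, lambda out: out["decel"] > 0),
    (classify,           lambda out: isinstance(out["braking_ok"], bool)),
]

def run_assured(d0):
    """Traverse the hierarchy top to bottom, assuring each MT in the path."""
    data = d0
    for mt, check in pipeline:
        data = mt(data)
        assert check(data), f"step {mt.__name__} failed its local check"
    return data

print(run_assured({"mass_kg": 1450.0, "braking_force_n": 11600.0}))
```

A flat list is of course a degenerate hierarchy; the paper's refinement steps are richer, but the traversal-and-check pattern is the same.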
Recommended publications
  • Multifocal: A Strategic Bidirectional Transformation Language for XML Schemas
  • Refinement Session Types
  • From Action System to Distributed Systems: The Refinement Approach
  • A Theory of Software Product Line Refinement
  • Design Engineering
  • Developing Reliable Software Using Object-Oriented Formal Specification and Refinement
  • From Types to Contracts: Supporting by Light-Weight Specifications the Liskov Substitution Principle
  • A Refinement Based Method for Developing Distributed Protocols
  • Checking Model Transformation Refinement
  • Scaling Step-Wise Refinement
  • Contracts-Refinement Proof System for Component-Based Embedded Systems
  • Metadata and Digital Library Development