Separation of Concerns for Dependable Software Design

Daniel Jackson and Eunsuk Kang
MIT Computer Science and Artificial Intelligence Laboratory
32 Vassar Street, Cambridge, MA 02139
[email protected]

ABSTRACT

For 'mixed-criticality' systems that have both critical and non-critical functions, the greatest leverage on dependability may be at the design level. By designing so that each critical requirement has a small trusted base, the cost of the analysis required for a dependability case might be dramatically reduced. An implication of this approach is that conventional object-oriented design may be a liability, because it leads to 'entanglement', and an approach based on separating services may be preferable.

Categories and Subject Descriptors
D.2.2 [Software Engineering]: Design Tools and Techniques; D.2.4 Software/Program Verification; D.2.10 Design.

General Terms
Design, Reliability, Languages, Theory, Verification.

Keywords
Dependability, software design, separation of concerns, object-orientation, formal methods, trusted bases, decoupling, entanglement, mixed-criticality systems.

1. WHY CURRENT APPROACHES FAIL

Today's software developers rely on two techniques to make their software more dependable: process and testing.

In any large development, a defined software process is essential; without it, standards are hard to enforce, issues fall through cracks, and the organization cannot learn from past mistakes. At minimum, a process might include procedures for version control, bug tracking and regression testing; typically, it also includes standard structures for documents and guidelines for meetings (e.g., for requirements gathering, design and code review); and most ambitiously it includes collection of detailed statistics and explicit mechanisms for adjusting the process accordingly.

Testing is used for two very different purposes. On the one hand, it is used to find bugs. Structural tests exploit knowledge of the structure of the software to identify bugs in known categories. A mutation test, for example, might focus on the possibility that the wrong operator was selected for an expression; a regression test is designed to detect the reoccurrence of a particular flaw. For this kind of testing, a successful test is one that fails, and thus identifies a bug.
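As a concrete illustration (ours, not the paper's) of the kind of fault a mutation test targets, consider a hypothetical function `within_capacity` and a mutant that differs in a single comparison operator; only a test at the boundary distinguishes them.

```python
# Illustrative sketch of an operator mutation (not from the paper):
# the mutant differs from the original in one comparison operator.

def within_capacity(n, capacity):
    return n <= capacity             # original

def within_capacity_mutant(n, capacity):
    return n < capacity              # mutation: '<=' became '<'

# Off-boundary inputs cannot tell the two apart; a boundary-value test
# "kills" the mutant. In bug-finding mode, that failing test is the success.
print(within_capacity(9, 10) == within_capacity_mutant(9, 10))    # True: mutant survives
print(within_capacity(10, 10) == within_capacity_mutant(10, 10))  # False: mutant killed
```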
On the other hand, testing can be used to provide evidence of dependability. In this case, tests focused on particular known bug categories are less useful (since a failure might come from a category that has not been identified). Instead, tests are drawn randomly from the 'operational profile' – the expected profile of use – and statistical inferences are made about the likelihood of failure in the field based on the sampling carried out in the tests. For this kind of testing, a successful test is one that succeeds, since a failing test case might not only require a bug fix (which sets the testing effort back to square one, since the target of the testing is now a new program on which old results no longer obtain), but provides direct evidence of low quality, thus altering the tester's assumptions and raising the bar for demonstrating dependability.

For modest levels of dependability, process and testing have been found to be effective, and they are widely regarded to be necessary components of any serious development. Arguments remain about exactly what form process and testing should take: whether the process should follow an incremental or a more traditional waterfall approach, or whether unit testing or subsystem testing should predominate, for example. But few would argue that process and testing are bad ideas.

For the high levels of dependability that are required in critical applications, however, process and testing – while necessary – do not seem to be sufficient. Despite many years of experience in organizations that adhere to burdensome processes and perform extensive testing, there is little compelling evidence that these efforts ensure the levels of dependability required. Although it seems likely that the adoption of rigorous processes has an indirect impact on dependability (by encouraging a culture of risk avoidance and attention to detail), evidence of a direct link is missing.

The effectiveness of statistical testing lies at the core of the difficulty, since the most likely way a process might establish dependability would be via testing – in the same way that industrial manufacturing processes achieve quality by measuring samples as they come off the assembly line. Unfortunately, even if the operational profile can be sampled appropriately, and all other statistical assumptions hold, the number of tests that is required to establish confidence is rarely feasible. To claim a failure rate of one in R executions or 'demands' to a confidence of 99% requires roughly 5R tests to be executed [5]; similarly, a rate of one failure in R hours requires testing for 5R hours. To meet the oft-stated avionics standard of 10⁻⁹ failures per hour, for example, would require testing for 5×10⁹ hours – almost a million years.
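The '5R' rule of thumb can be reproduced with a short calculation. A back-of-envelope sketch (ours, assuming the standard reliability-demonstration model in which n failure-free random tests give confidence 1 − (1 − 1/R)ⁿ that the per-demand failure rate is below 1/R):

```python
import math

def tests_needed(R, confidence=0.99):
    # Smallest n with 1 - (1 - 1/R)**n >= confidence,
    # i.e. n = log(1 - confidence) / log(1 - 1/R), roughly 4.6 * R.
    return math.ceil(math.log(1 - confidence) / math.log(1 - 1 / R))

print(tests_needed(1000))   # 4603 -- roughly 5R, as the text states

# The avionics example: one failure in 1e9 hours calls for about
# 5e9 failure-free test hours of continuous testing.
print(5e9 / (24 * 365))     # ~570,000 years
```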
2. A DIRECT APPROACH

In the last few years, a different approach has been advocated, drawing on experience in the field of system safety. The idea, in short, is that instead of relying on indirect evidence of dependability from process or testing, the developer provides direct evidence, by presenting an argument that connects the software to the dependability claim.

This argument, which is known as a dependability case, takes as premises the code of the software system, and assumptions about the domains with which it interacts (such as hardware peripherals, the physical environment and human operators), and from these premises, establishes more or less formally some particular critical properties.

The credibility of a dependability case, like the credibility of any argument, will rest on a variety of social and technical factors. When appropriate, it may include statistical arguments, although logical arguments are likely to play a greater role. To argue that a component has a failure rate of less than one in 10⁹ demands, it may be possible to test the component exhaustively, but if not, testing to achieve sufficient statistical confidence is likely to be too expensive, and formal verification will be a more viable option.

Formal methods will therefore likely play a key role in dependability cases, since there are no other approaches that can provide comparable levels of confidence at reasonable cost. Note, however, that just as rigorous process does not guarantee dependability, the use of formal methods does not either. A formal verification is only useful to the extent […] testing). For this reason, the claim that software is 'free from runtime errors' should be taken with a grain of salt. While it is certainly useful to know that a program will not suffer from an arithmetic overflow or exceed the bounds of an array, such knowledge does not provide any direct evidence that the software will not lead to a catastrophic failure. Indeed, the construction of a dependability case may reveal that the risk of runtime errors is not the weakest link in the chain.

3. THE COST OF BUILDING A CASE

A great attraction of statistical testing is that its cost does not depend on the size of the software being checked. Unfortunately, for the analyses that are more likely to be used as the elements of a dependability case, and which rely on examining the text of the software, cost increases at least linearly with size. This underlines the importance of designing for simplicity. The smaller and simpler the code, the lower the cost of constructing the case for its correctness; the additional upfront cost of a cleaner design will likely prove to be a good investment. But even if Hoare is right, and 'inside every large program there is a small one trying to get out', it may not be possible to find that small program, especially the first time around.

A more realistic aim is to reduce not the size of the entire program, but rather the size of the subprogram that must be considered to argue that a critical property holds. This subprogram, the trusted base for the critical property, must include not only the code that directly implements the relevant functionality, but any other code that might potentially undermine the property.

One might think that identifying trusted bases for properties in this way helps only because it allows the rest of the program to be ignored. But an advantage may be gained even if the trusted bases cover the entire program. Suppose the dependability case must establish k properties of the system, and that each property has a trusted base of size B. The analysis cost is likely to be superlinear; a reasonable guess is that it will be quadratic, to account for the interactions between parts. In that case, the total cost would be kB². Now if the trusted bases cover the entire program (but do not overlap), the size of the total codebase is kB. For the same codebase, but without a factoring into trusted bases, the cost of the analysis would be k(kB)², which is larger by a factor of k².
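The arithmetic is easy to check with a quick sketch (ours, not the paper's), using the quadratic cost model the text assumes; the values of k and B are arbitrary illustrations.

```python
# Back-of-envelope check of the trusted-base argument, under the
# quadratic cost model assumed in the text: analyzing a code body of
# size n costs on the order of n**2.

def analysis_cost(size):
    return size ** 2

k, B = 10, 1000  # e.g. 10 critical properties, trusted bases of 1000 lines

# Factored design: each property is argued over its own base of size B.
factored = k * analysis_cost(B)

# Unfactored design: each property must be argued over the whole
# codebase of size k*B.
monolithic = k * analysis_cost(k * B)

print(monolithic // factored)  # 100, i.e. k**2 -- as the text claims
```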
Recommended publications
  • An Aggregate Computing Approach to Self-Stabilizing Leader Election
    An Aggregate Computing Approach to Self-Stabilizing Leader Election. Yuanqiu Mo (University of Iowa, Iowa City, Iowa 52242; Email: [email protected]), Jacob Beal (Raytheon BBN Technologies, Cambridge, MA, USA 02138; Email: [email protected]), Soura Dasgupta (University of Iowa, Iowa City, Iowa 52242; Email: [email protected]).

    Abstract—Leader election is one of the core coordination problems of distributed systems, and has been addressed in many different ways suitable for different classes of systems. It is unclear, however, whether existing methods will be effective for resilient device coordination in open, complex, networked distributed systems like smart cities, tactical networks, personal networks and the Internet of Things (IoT). Aggregate computing provides a layered approach to developing such systems, in which resilience is provided by a layer comprising a set of adaptive algorithms whose compositions have been shown to cover a large class of coordination activities. In this paper, we show how a feedback interconnection of these basis set algorithms can perform distributed leader election resilient to device topology and position changes. We also characterize a key design parameter that defines some important performance attributes: too large a value impairs resilience to loss of existing leaders, while too small a value leads to multiple leaders.

    [Fig. 1. Illustration of three basis block operators: (a) information-spreading (G), (b) information aggregation (C), and (c) temporary state (T)]

    …a separation of concerns into multiple abstraction layers, much like the OSI model for communication [2], factoring the…
  • A Theory of Fault Recovery for Component-Based Models
    A Theory of Fault Recovery for Component-based Models. Borzoo Bonakdarpour (School of Computer Science, University of Waterloo; Email: [email protected]), Marius Bozga (VERIMAG/CNRS, Gières, France; Email: [email protected]), Gregor Gössler (INRIA-Grenoble, Montbonnot, France; Email: [email protected]).

    Abstract. This paper introduces a theory of fault recovery for component-based models. We specify a model in terms of a set of atomic components incrementally composed and synchronized by a set of glue operators. We define what it means for such models to provide a recovery mechanism, so that the model converges to its normal behavior in the presence of faults (e.g., in self-stabilizing systems). We present a sufficient condition for incrementally composing components to obtain models that provide fault recovery. We identify corrector components whose presence in a model is essential to guarantee recovery after the occurrence of faults. We also formalize component-based models that effectively separate recovery from functional concerns. We also show that any model that provides fault recovery can be transformed into an equivalent model, where functional and recovery tasks are modularized in different components.

    Keywords: Fault-tolerance; Transformation; Separation of concerns; BIP

    1 Introduction. Fault-tolerance has always been an active line of research in the design and implementation of dependable systems. Intuitively, tolerating faults involves providing a system with the means to handle unexpected defects, so that the system meets its specification even in the presence of faults. In this context, the notion of specification may vary depending upon the guarantees that the system must deliver in the presence of faults.
  • CS8603 – Distributed Systems Syllabus, Unit 3: Distributed Mutex & Deadlock – Distributed Mutual Exclusion Algorithms: Introduction
    CS8603 – DISTRIBUTED SYSTEMS SYLLABUS, UNIT 3: DISTRIBUTED MUTEX & DEADLOCK

    Distributed mutual exclusion algorithms: Introduction – Preliminaries – Lamport's algorithm – Ricart–Agrawala algorithm – Maekawa's algorithm – Suzuki–Kasami's broadcast algorithm. Deadlock detection in distributed systems: Introduction – System model – Preliminaries – Models of deadlocks – Knapp's classification – Algorithms for the single resource model, the AND model and the OR model.

    DISTRIBUTED MUTUAL EXCLUSION ALGORITHMS: INTRODUCTION

    Mutual exclusion is a fundamental problem in distributed computing systems: it ensures that concurrent access of processes to a shared resource or data is serialized, that is, executed in a mutually exclusive manner. Mutual exclusion in a distributed system states that only one process is allowed to execute the critical section (CS) at any given time. In a distributed system, shared variables (semaphores) or a local kernel cannot be used to implement mutual exclusion. There are three basic approaches for implementing distributed mutual exclusion:

    1. Token-based approach.
    2. Non-token-based approach.
    3. Quorum-based approach.

    In the token-based approach, a unique token (also known as the PRIVILEGE message) is shared among the sites. A site is allowed to enter its CS if it possesses the token, and it continues to hold the token until the execution of the CS is over; mutual exclusion is ensured because the token is unique (see the sketch below). In the non-token-based approach, two or more successive rounds of messages are exchanged among the sites to determine which site will enter the CS next; mutual exclusion is enforced because the assertion becomes true only at one site at any given time.
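    A minimal sketch (ours, not from the syllabus) of the token-based idea just described: a single token circulates and a site executes its CS only while holding it. The `Site` class and the fixed ring topology are assumptions of this illustration; real algorithms such as Suzuki–Kasami route the token with request messages and sequence numbers instead.

    ```python
    # Illustrative simulation of token-based distributed mutual exclusion.
    # The unique token plays the role of the PRIVILEGE message: a site may
    # execute its critical section (CS) only while it holds the token.

    class Site:
        def __init__(self, site_id):
            self.site_id = site_id
            self.wants_cs = False

        def critical_section(self):
            # Reached only by the current token holder, so at most one
            # site executes here at any given time.
            print(f"site {self.site_id} executes its CS")

    def circulate_token(sites, rounds):
        token_holder = 0                        # token starts at site 0
        for _ in range(rounds):
            site = sites[token_holder]
            if site.wants_cs:
                site.critical_section()         # enter CS while holding the token
                site.wants_cs = False           # done: CS execution is over
            token_holder = (token_holder + 1) % len(sites)  # pass to neighbour

    sites = [Site(i) for i in range(5)]
    sites[1].wants_cs = True
    sites[3].wants_cs = True
    circulate_token(sites, rounds=10)
    ```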
  • Separation of Concerns in Multi-Language Specifications
    INFORMATICA, 2002, Vol. 13, No. 3, 255–274. © 2002 Institute of Mathematics and Informatics, Vilnius. Separation of Concerns in Multi-language Specifications. Robertas DAMAŠEVIČIUS, Vytautas ŠTUIKYS. Software Engineering Department, Kaunas University of Technology, Studentų 50, 3031 Kaunas, Lithuania. E-mail: [email protected], [email protected]. Received: April 2002.

    Abstract. We present an analysis of the separation of concerns in multi-language design and multi-language specifications. The basis for our analysis is the paradigm of the multi-dimensional separation of concerns, which claims that multiple dimensions of concerns in a design should be implemented independently. Multi-language specifications are specifications where different concerns of a design are implemented using separate languages as follows. (1) Target language(s) implement domain functionality. (2) External (or scripting, meta-) language(s) implement generalisation of the repetitive design features, introduce variations, and integrate components into a design. We present case studies and experimental results for the application of the multi-language specifications in hardware design.

    Key words: multi-language design, separation of concerns, meta-programming, scripting, hardware design.

    1. Introduction. Today the high-level design of a hardware (HW) system is usually based either on the usage of the pure HW description language (HDL) (e.g., VHDL, Verilog), or the algorithmic one (e.g., C++, SystemC), or both. The emerging problems are similar for both the HW and software (SW) domains. For example, the increasing complexity of SW and HW designs urges the designers and researchers to seek new design methodologies.
  • Composition of Software Architectures – Christos Kloukinas
    Composition of Software Architectures. Christos Kloukinas. Ph.D. thesis, Computer Science [cs], Université Rennes 1, 2002. English. HAL Id: tel-00469412, https://tel.archives-ouvertes.fr/tel-00469412. Submitted on 1 Apr 2010.

    Composition of Software Architectures – Ph.D. Thesis – Presented in front of the University of Rennes I, France – English Version. Christos Kloukinas. Jury members: Jean-Pierre Banâtre, Jacky Estublier, Cliff Jones, Valérie Issarny, Nicole Lévy, Joseph Sifakis. February 12, 2002.

    Résumé (translated): Computer systems are becoming ever more complex and must offer a growing number of non-functional properties, such as reliability, availability, security, etc. Such properties are usually provided by means of a middleware layer that sits between the hardware (and the operating system) and the application level, masking the specificities of the underlying system and allowing applications to be used over different infrastructures. However, as the requirements for non-functional properties increase, system architects find themselves confronted with cases where no available middleware provides all of the targeted non-functional properties.
  • Multi-Dimensional Concerns in Parallel Program Design
    Multi-Dimensional Concerns in Parallel Program Design

    Parallel programming is difficult; hence design patterns. Despite rapid advances in parallel hardware performance, the full potential of processing power is not being exploited in the community for one clear reason: the difficulty of designing parallel software. Identifying tasks, designing a parallel algorithm, and managing the balanced load among the processing units have been daunting tasks for novice programmers, and even experienced programmers are often trapped with design decisions that underachieve in potential peak performance. Over the last decade there have been several approaches to identifying common patterns that are repeatedly used in the parallel software design process. Documentation of these "design patterns" helps programmers by providing definitions, solutions and guidelines for common parallelization problems. Such patterns can take the form of templates in software tools, written paragraphs with detailed descriptions, or even charts, diagrams and examples.

    The current parallel patterns prove to be ineffective. In practice, however, the parallel design patterns identified thus far [over the last two decades!] contribute little, if anything at all, to the development of non-trivial parallel software. In short, the parallel design patterns are too trivial and fail to give detailed guidelines for specific parameters that can affect the performance of a wide range of complex problem requirements. Experience in the community is converging on a consensus that there are only a limited number of patterns: namely pipeline, master & slave, divide & conquer, geometric, replicable, repository, and not many more. So far the community's efforts have focused on discovering more design patterns, but new patterns vary little and fall into a range that is not far from one of the patterns mentioned above.
  • Lecture Notes in Computer Science 1543, Edited by G. Goos, J. Hartmanis and J. van Leeuwen
    Lecture Notes in Computer Science 1543. Edited by G. Goos, J. Hartmanis and J. van Leeuwen. Springer: Berlin Heidelberg New York Barcelona Hong Kong London Milan Paris Singapore Tokyo.

    Serge Demeyer, Jan Bosch (Eds.): Object-Oriented Technology. ECOOP '98 Workshop Reader. ECOOP '98 Workshops, Demos, and Posters, Brussels, Belgium, July 20-24, 1998, Proceedings.

    Series Editors: Gerhard Goos, Karlsruhe University, Germany; Juris Hartmanis, Cornell University, NY, USA; Jan van Leeuwen, Utrecht University, The Netherlands. Volume Editors: Serge Demeyer, University of Berne, Neubruckstr. 10, CH-3012 Berne, Switzerland, E-mail: [email protected]; Jan Bosch, University of Karlskrona/Ronneby, Softcenter, S-372 25 Ronneby, Sweden, E-mail: [email protected].

    Cataloging-in-Publication data applied for. Die Deutsche Bibliothek – CIP-Einheitsaufnahme: Object-oriented technology: workshop reader, workshops, demos, and posters / ECOOP '98, Brussels, Belgium, July 20-24, 1998 / Serge Demeyer; Jan Bosch (ed.). Berlin; Heidelberg; New York; Barcelona; Hong Kong; London; Milan; Paris; Singapore; Tokyo: Springer, 1998 (Lecture notes in computer science; Vol. 1543). CR Subject Classification (1998): D.1-3, H.2, E.3, C.2, K.4.3, K.6. ISSN 0302-9743. ISBN 3-540-65460-7 Springer-Verlag Berlin Heidelberg New York.

    This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag.
  • Using Architectural Styles in Design – The Nature of Computation – The Quality Concerns
    Using Architectural Styles in Design, Lecture 6 (U08182 © Peter Lo 2010). In this lecture you will learn: use of software architectural styles in design; selection of an appropriate architectural style; combinations of styles; and case studies – Case study 1: Keyword Frequency Vector (KFV); Case study 2: Keyword in Context (KWIC).

    Which architectural style should be used? The selection of a software architectural style should take into consideration two factors: the nature of the computation problem to be solved, e.g. the input/output data structure and features, the required functions of the system, etc.; and the quality concerns that the design of the system must take into consideration, such as performance, reliability, security, maintenance, reuse, modification, interoperability, etc.

    Characteristic features of architectural styles: Control Topology; Data Topology; Control/Data Interaction Topology; Control/Data Interaction Direction. Behavioral features of architectural styles: Synchronicity; Data Access Mode; Data Flow Continuity; Binding Time.

    Control Topology: What geometric form does the control flow for systems of the style take? E.g., in the main-program-and-subroutine style, components must be organized into a hierarchical structure. Synchronicity: How dependent are the components' actions upon each other's control state? Lockstep – one component's state implies the other component's state. Synchronous – components synchronize at certain states to make sure they are in the synchronized states at the same time in order to cooperate, but the relationships on other states are not predictable.
  • Workshop on Multi-Dimensional Separation of Concerns in Software Engineering
    ACM SIGSOFT Software Engineering Notes, vol. 26, no. 1, January 2001, Page 78. Workshop on Multi-Dimensional Separation of Concerns in Software Engineering. Peri Tarr, William Harrison, Harold Ossher (IBM T. J. Watson Research Center, USA), Anthony Finkelstein (University College London, UK), Bashar Nuseibeh (Imperial College, UK), Dewayne Perry (University of Texas at Austin, USA). Workshop web site: http://www.research.ibm.com/hyperspace/workshops/icse2000

    ABSTRACT. Separation of concerns has been central to software engineering for decades, yet its many advantages are still not fully realized. A key reason is that traditional modularization mechanisms do not allow simultaneous decomposition according to multiple kinds of (overlapping and interacting) concerns. This workshop was intended to bring together researchers working on more advanced modularization mechanisms, and practitioners who have experienced the need for them, as a step towards a common understanding of the issues, problems and research challenges.

    Keywords: Separation of concerns, decomposition, composition.

    …[A] concern may promote some goals and activities, while impeding others; thus, any criterion for decomposition will be appropriate for some contexts, but not for all. Further, multiple kinds of concerns may be relevant simultaneously, and they may overlap and interact, as features and classes do. Thus, different concerns and modularizations are needed for different purposes: sometimes by class, sometimes by feature, sometimes by viewpoint, or aspect, role, variant, or other criterion. These considerations imply that developers must be able to identify, encapsulate, modularize, and manipulate multiple dimensions of concern simultaneously, and to introduce new concerns and dimensions at any point during the software lifecycle, without suffering the effects of invasive modification and rearchitecture.
  • Preserving Separation of Concerns Through Compilation
    Preserving Separation of Concerns through Compilation: A Position Paper. Hridesh Rajan, Robert Dyer, Youssef Hanna, Harish Narayanappa. Dept. of Computer Science, Iowa State University, Ames, IA 50010. {hridesh, rdyer, ywhanna, harish}@iastate.edu

    ABSTRACT. Today's aspect-oriented programming (AOP) languages provide software engineers with new possibilities for keeping conceptual concerns separate at the source code level. For a number of reasons, current aspect weavers sacrifice this separation in transforming source to object code (and thus the very term weaving). In this paper, we argue that sacrificing modularity has significant costs, especially in terms of the speed of incremental compilation and in testing. We argue that design modularity can be preserved through the mapping to object code, and that preserving it can have significant benefits in these dimensions. We present and evaluate a mechanism for preserving design modularity in object code, showing that doing so has potentially significant benefits.

    …tempt the programmer to switch to another task entirely (e.g. email, Slashdot headlines). The significant increase in incremental compilation time is because when there is a change in a crosscutting concern, the effect ripples to the fragmented locations in the compiled program, forcing their re-compilation. Note that the system studied by Lesiecki [12] can be classified as a small to medium scale system with just 700 classes and around 70 aspects. In a large-scale system, slowdown in the development process can potentially outweigh the benefits of separation of concerns. Besides incremental compilation, loss of separation of concerns also makes unit testing of AO programs difficult. The…
  • Versatility and Efficiency in Self-Stabilizing Distributed Systems
    Versatility and Efficiency in Self-Stabilizing Distributed Systems. Stéphane Devismes. Distributed, Parallel, and Cluster Computing [cs.DC]. Université Grenoble Alpes, 2020. HAL Id: tel-03080444, https://hal.archives-ouvertes.fr/tel-03080444v4. Submitted on 21 Sep 2021.

    Habilitation à diriger des recherches, speciality: Computer Science (ministerial decree of 23 November 1988). Presented by Stéphane Devismes; prepared at the VERIMAG laboratory and the doctoral school Mathématiques, Sciences et Technologies de l'Information, Informatique. Versatility and Efficiency in Self-Stabilizing Distributed Systems (Généralité et Efficacité dans les Systèmes Distribués Autostabilisants). Publicly defended on 17 December 2020 before a jury composed of: Denis TRYSTRAM, Professor, Grenoble INP, President; Toshimitsu MASUZAWA, Professor, Osaka University, Reviewer; Achour MOSTÉFAOUI…
  • Session 8: From Analysis and Design to Software Architectures (Part I Addendum)
    Software Engineering G22.2440-001, Session 8 – Sub-Topic 1: From Analysis and Design to Software Architectures (Part I Addendum). Dr. Jean-Claude Franchitti, New York University, Computer Science Department, Courant Institute of Mathematical Sciences.

    Agenda: Towards Agile Enterprise Architecture Models • Building an Object Model Using UML • Architectural Analysis • Design Patterns • Architectural Patterns • Architecture Design Methodology • Achieving Optimum-Quality Results • Selecting Kits and Frameworks • Using Open Source vs. Commercial Infrastructures.

    Part I: Towards Agile Enterprise Architecture Models. There is an urgent need for more reliable, concept-driven, early requirements processes (review): SDLC process model frameworks have matured and converged significantly on effective practices in system development phases; however, they are still weakest on the front-end, moving from less mature methods/support in the early phases to more mature methods/support later.

    RUP Phases: Inception (project planning, use case models, object models, data models, UI prototypes, …) • Elaboration (hi-level languages, frameworks, IDEs, n-tier architectures, design patterns, …) • Construction (QA methods, test environments, issue support tools, usability engineering, …) • Transition (???). MSF Phases: Envisioning, Planning, Developing, Stabilizing / Deploying.

    XP Practices (review):
    • The Planning Game – determine the scope of the next release, based on business priorities and technical estimates.
    • Small Releases – every release should be as small as possible, containing the most valuable business requirements.
    • Metaphor – guide all development with a simple, shared story of how the whole system works.
    • Simple Design – design for the current functionality, not future needs, and make the design as simple as possible for that functionality (YAGNI).
    • Refactoring – ongoing redesign of software to improve its responsiveness to change.
    • Testing – developers write the unit tests before the code; unit tests are run "continuously"; customers write the functional tests.