Lecture Notes in Computer Science 9400
Commenced Publication in 1973

Founding and Former Series Editors:
Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen

Editorial Board
David Hutchison, Lancaster University, Lancaster, UK
Takeo Kanade, Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler, University of Surrey, Guildford, UK
Jon M. Kleinberg, Cornell University, Ithaca, NY, USA
Friedemann Mattern, ETH Zurich, Zürich, Switzerland
John C. Mitchell, Stanford University, Stanford, CA, USA
Moni Naor, Weizmann Institute of Science, Rehovot, Israel
C. Pandu Rangan, Indian Institute of Technology, Madras, India
Bernhard Steffen, TU Dortmund University, Dortmund, Germany
Demetri Terzopoulos, University of California, Los Angeles, CA, USA
Doug Tygar, University of California, Berkeley, CA, USA
Gerhard Weikum, Max Planck Institute for Informatics, Saarbrücken, Germany

More information about this series at http://www.springer.com/series/7408

Betty H.C. Cheng, Benoit Combemale, Robert B. France, Jean-Marc Jézéquel, Bernhard Rumpe (Eds.)

Globalizing Domain-Specific Languages
International Dagstuhl Seminar
Dagstuhl Castle, Germany, October 5–10, 2014
Revised Papers

Editors
Betty H.C. Cheng, Michigan State University, East Lansing, MI, USA
Benoit Combemale, IRISA, Université de Rennes 1, Rennes, France
Robert B. France (†), Colorado State University, Fort Collins, CO, USA
Jean-Marc Jézéquel, IRISA, Université de Rennes 1, Rennes, France
Bernhard Rumpe, RWTH Aachen University, Aachen, Germany

ISSN 0302-9743 / ISSN 1611-3349 (electronic)
Lecture Notes in Computer Science
ISBN 978-3-319-26171-3 / ISBN 978-3-319-26172-0 (eBook)
DOI 10.1007/978-3-319-26172-0
Library of Congress Control Number: 2015953257
LNCS Sublibrary: SL2 – Programming and Software Engineering
Springer Cham Heidelberg New York Dordrecht London
© Springer International Publishing Switzerland 2015

This work is subject to copyright.
All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made.

Printed on acid-free paper

Springer International Publishing AG Switzerland is part of Springer Science+Business Media (www.springer.com)

Foreword

Dismantling the “Tower of Babel”

The design and construction of complex technical systems such as cyber-physical systems has always been a difficult process due to the sheer number and diversity of design decisions that need to be made. What makes this particularly challenging is the fact that many of these design choices are interdependent, so that selecting a particular alternative constrains the choices available for other decisions. This issue is becoming more vexing as the number of potential technical alternatives increases with technical progress while, at the same time, there is an ever-growing demand for more sophisticated system functionality.
To make this intricate task manageable, engineers have throughout history relied on the trusted “divide and conquer” technique; that is, first decomposing a complex problem into more manageable sub-problems, then solving each of them individually, and, finally, combining the resulting solutions into a single integrated system. Depending on the size and complexity of the system under consideration, the same approach may have to be applied at multiple hierarchically nested levels.

A fundamental weakness of this approach is that any decomposition of the original problem is itself a critical design choice that invariably has repercussions on many key downstream design decisions. Exacerbating this is the fact that the decomposition decision has to be made early, when least is known about the problem and possible solutions. Consequently, it is not uncommon to see that, as development progresses and problem understanding grows, engineers begin to realize that their chosen decomposition is suboptimal and that others would have been more suitable. Unfortunately, all too often this realization comes too late, because significant time, resources, and effort may already have been expended developing the system based on the original decomposition, meaning that any major re-engineering would be either impractical or prohibitively expensive.

To minimize the risk of inappropriate decompositions, the design and development teams need to gain insight into and experience with the problem at hand as early as possible. Clearly, since the system to be developed does not exist until it is implemented, another age-old engineering technique is used: models that represent the intended design of components yet to be built. These engineering models can take many different forms, including renderings that exist only as ideas in designers’ heads.
However, to be truly useful, an engineering model must take on a concrete form so that it can be communicated to other stakeholders as well as analyzed for suitability. Formal models in particular are desirable, since they can be analyzed by formal (i.e., mathematical or logical) means, which, if the analyses are conducted correctly, are more likely to yield correct and, hence, more trustworthy results. Moreover, if the models are realized on a computer, many of the mechanistic aspects of the analysis procedure can be automated.

While computer-based models can help provide more trustworthy analysis results, they do not on their own solve the problem of design-decision interdependencies unless, of course, the models themselves are linked in appropriate ways. How to link models, particularly models of different subsystems, is one of the primary themes of this volume. Until we had computer-based subsystem models, the only way to link models was through textual reference or human recall, methods that have proven highly error prone at best. With computer-supported hyperlinking it is possible to make such links not only more reliable but, even more importantly, more meaningful, because the links can account in various ways for the specific semantics of the model elements being linked. Thus, a hyperlink between a software component in a model of the software and an element representing a processor in a model of the intended hardware platform can be customized to capture the specific nature of the software deployment relationship. With that information it might be possible, for example, to compute whether or not the processor is sufficiently powerful to support that software component. But, even with such computer-aided support, one major problem remains: what if the models whose elements are to be linked are specified using different modeling languages?
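The deployment example above can be made concrete with a small sketch: a link that knows it means “deployed on” can itself check whether the target processor can host the component. The class and attribute names below are illustrative assumptions, not the API of any tool discussed in this volume.

```python
from dataclasses import dataclass

# Hypothetical model elements from two different models: a software
# component (software model) and a processor (hardware platform model).
# The attributes are assumptions made for this sketch.

@dataclass
class SoftwareComponent:
    name: str
    required_mips: float  # processing demand of the component

@dataclass
class Processor:
    name: str
    available_mips: float  # capacity offered by the processor

@dataclass
class DeploymentLink:
    """A 'semantically aware' link: it represents deployment, so it
    can support a capacity analysis across the two models."""
    component: SoftwareComponent
    target: Processor

    def is_feasible(self) -> bool:
        # The link's semantics ("deployed on") make this check meaningful.
        return self.target.available_mips >= self.component.required_mips

link = DeploymentLink(SoftwareComponent("controller", 120.0),
                      Processor("cpu0", 200.0))
print(link.is_feasible())  # True: cpu0 has enough capacity
```

A plain, untyped hyperlink could only record that the two elements are related; the typed link above additionally knows what analysis the relationship licenses.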
This is much more than just a syntactic issue: different modeling languages may be based on very different semantic foundations. For example, time in one language may be modeled as a continuous quantity, whereas it may be discrete in another. Or, one language may be used to model software while another represents the thermal aspects of a system: two radically different domains based on very different paradigms and different ontologies. Yet, although the various languages used in the design of a complex system deal with different phenomena, they all describe components of the same system, including some components that are simultaneously present in multiple models but in different forms. How should this heterogeneity of paradigms and languages be handled? What is the nature of these “semantically aware” hyperlinks?

The salient approaches proposed here may come as a surprise to some, particularly given the use of the term “globalization”, which might in the minds of some people connote an Esperanto-like homogenization and unification. What we see, instead, is a more subtle and much more pragmatic approach based on integration rather than replacement. Even a superficial analysis of how much has been invested in existing technologies and languages reveals that it would be impractical to replace them with some new “universal” language and toolset combination, even if we could realistically devise one. The history of technological progress teaches us that most viable technical disciplines are constantly evolving, sprouting new offshoots (i.e., new sub-disciplines) along the way. Each of these is a specialization of another specialization, all of it driven by the need to reason concisely and yet accurately about domain-specific concerns and phenomena. This leads to an ever-increasing number of domain-specific modeling paradigms and languages. Without doubt, this trend will never cease.
Consequently, “globalization” as used in this volume means co-existence, and more specifically, collaboration at