Best Practice in Robotics Grant Agreement Number: 231940 Funding Period: 01.03.2009 – 28.02.2013 Instrument: Collaborative Project (IP)

Deliverable D-2.4: Robot Control Architecture Workbench Research Results on Architectures in Robotics: Aspects, Models, Patterns, and Implementations

Author names: Gerhard K. Kraetzschmar (PI), Frederik Hegger, Nico Hochgeschwender, Jan Paulus, Michael Reckhaus, Azamat Shakhimardanov
Lead contractor for this deliverable: Bonn-Rhein-Sieg University (BRSU)
Due date of deliverable: 28.02.2013
Actual submission date: 28.02.2013
Dissemination level: PU/RE
Revision: 1.0
© 2013 by BRICS Team at BRSU

Abstract

Work package WP2 of the BRICS project was concerned with Architectures, Middleware, and Interfaces. This deliverable is mainly concerned with architectures, but also provides a comprehensive review and survey of closely related aspects. During BRICS, we looked intensively into the architecture problem and gained a lot of insights that will hopefully help the robotics community to lead more coherent and structured discussions on robot architectures. One insight is that "standardized" architectures are currently an unrealistic dream for robotics applications that combine state-of-the-art actuation, sensing, processing, and cognition. In this situation, it does not make sense to design a robot control architecture workbench that tries to provide such standardized architectures. Instead, we identified both common elements and repeating patterns in robot architectures. Our work then concentrated on providing the mechanisms necessary to provide the common elements in generic, reusable ways, and on developing languages to describe the repeating patterns. This work is much more general and extensive than what was initially foreseen, and we do not consider it completed yet; any claim on the completeness of the identified patterns, or even on the coverage of certain areas, would be premature due to the number and size of existing architectures that need to be investigated and the limited resources we had available. Also, the quality of tool support critically depends on feedback from practical use, and it is clear that there is still a lot of room for improvement. However, some solid foundations have been laid. We identified six different aspects of architecture, which help to clarify the discussion of architectures. We also defined and described a comprehensive development process for building robot applications, and each of the architecture aspects has a clear role in the development process.
The third contribution consists of looking at architecture patterns from a robotics perspective. We also describe the means developed in BRICS to model architectures, and finally the tools developed to implement them.

Acknowledgements

The research leading to these results has received funding from the European Community’s Seventh Framework Programme (FP7/2007-2013) under Grant Agreement No. 231940.

The contributions of our coworkers from the BRICS consortium are gratefully acknowledged. In particular, we thank

• Hugo Garcia and Herman Bruyninckx for their work on BRIDE,

• Markus Klotzbücher and Herman Bruyninckx for their contributions to the BRICS Component Model,

• Yury Brodsky, Robert Wilterdink, and Stefano Stramigioli for their XXXX

• Luca Gherardi and Davide Brugali from University of Bergamo for their work on variability modeling and software product families, their feedback on BRICS RAP, the robot application development process, and the joint organization of the BRICS Research Camp 4 on Architectures,

• Alexander Bubeck, Florian Weisshardt, and Ulrich Reiser from Fraunhofer IPA for their great support for the Care-O-bot 3 platform during various research camps, exhibition events (IROS-2011), and RoboCup@Home competitions,

• Alexander Bubeck, Ulrich Reiser, and Martin Hägele for joint work on the BRIDE/ROS integration as well as the activities related to the Showcase Education, and

• Nicola Tomatis from Bluebotics for feedback on BRICS RAP.

Contents

1 Introduction
  1.1 The BRICS Project Context
  1.2 Review of WP2 Objectives
  1.3 Summary of Major Results
  1.4 Overview on the Report

2 Aspects of Robot Architectures
  2.1 On Architectures in Robotics
  2.2 Architectural Aspects

3 Architecture Aspects and RAP
  3.1 Phase 1: Scenario Building
    3.1.1 Step 1-1: Scenario Definition
    3.1.2 Step 1-2: Scenario Generalization
    3.1.3 Step 1-3: Scenario Simulation Modeling
    3.1.4 Step 1-4: Customer Acceptance Test Definition
    3.1.5 Notes on Phase 1
  3.2 Phase 2: Functional Design
    3.2.1 Step 2-1: Hardware Requirements
    3.2.2 Step 2-2: Functional Requirements
    3.2.3 Step 2-3: Functional Decomposition
    3.2.4 Step 2-4: Functional Validation
    3.2.5 Notes on Phase 2
  3.3 Phase 3: Platform Building
    3.3.1 Step 3-1: Hardware Platform Configuration
    3.3.2 Step 3-2: Software Platform Configuration
    3.3.3 Step 3-3: Robot Emulation Model Definition
    3.3.4 Step 3-4: System Component Testing
    3.3.5 Notes on Phase 3
  3.4 Phase 4: Capability Building
    3.4.1 Step 4-1: (Composite) Component Construction
    3.4.2 Step 4-2: Deployment Constraints Specification
    3.4.3 Step 4-3: Content Creation
    3.4.4 Step 4-4: Skills and Capabilities Testing
    3.4.5 Notes on Phase 4
  3.5 Phase 5: System Deployment
    3.5.1 Step 5-1: Systems Packaging
    3.5.2 Step 5-2: Runtime Architecture Construction
    3.5.3 Step 5-3: System Launch Management
    3.5.4 Step 5-4: System-Level Testing


  3.6 Phase 6: System Benchmarking
    3.6.1 Step 6-1: Components, Skills, and System Stress Testing
    3.6.2 Step 6-2: Safety and Security Checks
    3.6.3 Step 6-3: Reliability and Durability Assessment
    3.6.4 Step 6-4: Performance Testing
  3.7 Phase 7: Product Deployment
    3.7.1 Step 7-1: Target Platform Component Identification
    3.7.2 Step 7-2: Target Platform Resources Allocation
    3.7.3 Step 7-3: Maintenance Instrumentation
    3.7.4 Step 7-4: Target Platform System Testing
  3.8 Phase 8: Product Maintenance
    3.8.1 Step 8-1: Log Analysis
    3.8.2 Step 8-2: System Tuning
    3.8.3 Step 8-3: System Extension
    3.8.4 Step 8-4: On-Site Testing

4 Patterns in Robot Architectures
  4.1 Architecture Aspects Revisited
  4.2 Towards Architectural Patterns
    4.2.1 On Architectural Patterns in Robotics
    4.2.2 Pattern Schema
    4.2.3 Example: The Pipeline Pattern

5 Modeling Robot Architectures
  5.1 MDD and DSLs
  5.2 CMs and CM Concepts
    5.2.1 Component Model Concepts
    5.2.2 Component Model Concepts in Robot Programming Frameworks
    5.2.3 Summary on Component Model Concepts
  5.3 Dist+Comm in CMs
    5.3.1 Motivation
    5.3.2 A Use Case
    5.3.3 Concepts
    5.3.4 Communication and Connection Model
    5.3.5 Software Connectors in Robotics
    5.3.6 Protocol Stack View (PSV)
    5.3.7 Experimental Analysis of Message Latencies
  5.4 DSL for Task and Motion Descriptions
  5.5 DSL for Kinematics and Dynamics Computations
    5.5.1 Synthesis
    5.5.2 Domain-specific Composition Rules and Their Semantics
  5.6 DSL for System Deployment
    5.6.1 Model-based Software Deployment
    5.6.2 The BRICS Research Camp Use Case
    5.6.3 A Domain-specific Language for Deployment
    5.6.4 Implementation
    5.6.5 Model Generation for Robot Software Frameworks
    5.6.6 Discussion and Related Work
    5.6.7 Requirements Revisited


6 Implementing Robot Architectures
  6.1 OODL Interface Guidelines

List of Figures

2.1 The architecture aspects used in BRICS.

3.1 Phases of the BRICS Robot Application Development Process.
3.2 Overview of phases and steps of the BRICS Robot Application Development Process.
3.3 The scenario building phase of the BRICS robot application development process.
3.4 The platform building phase of the BRICS robot application development process.
3.5 The platform building phase of the BRICS robot application development process.
3.6 The capability building phase of the BRICS robot application development process.
3.7 The system building phase of the BRICS robot application development process.
3.8 The benchmarking phase of the BRICS robot application development process.
3.9 The deployment phase of the BRICS robot application development process.
3.10 The maintenance phase of the BRICS robot application development process.

4.1 The pipeline pattern with (optionally) several data sources (sensors) and a start/stop mechanism. The pattern consists of an initial processing step, arbitrary intermediate processing steps, and one final processing step to produce the final result.

5.1 Different steps of the analysis and assessment.
5.2 A component network (ROS nodes) on a Care-O-bot 3 robot performing tasks in a domestic environment.
5.3 An excerpt of the communication and connection meta model.
5.4 The Protocol Stack View and its relation to the ISO/OSI network reference model.
5.5 Components of a robot task.
5.6 Furniture assembly scenario.
5.7 Relation between task-skill-motion layers and the metametamodel.
5.8 Graph model of a robotic kinematic mechanism.
5.9 The figure visualizes the model-based software deployment approach. The approach includes four phases of the development process shown on top. In each phase models are created. A color coding is used to show which models are created in which phase. The models are either used (referenced) in other models or serve as an input/output for a tool.
5.10 The fetch and carry application used for evaluation.
5.11 The computational architecture meta-model.
5.12 The feature model that describes the functional variability of the case study. Features with the black circle on top are mandatory and represent functionalities that have to be present in every application. Features with the white circle instead represent optional functionalities. White arcs represent alternative containments (only one sub-feature can be selected), while black arcs depict or-containments (at least one sub-feature has to be selected).


5.13 The deployment sequence meta-model.
5.14 The runtime architecture meta-model.
5.15 The deployment meta-model.
5.16 An excerpt of the deployment DSL.
5.17 An excerpt of the Orocos RTT-specific OCL constraints.
5.18 An excerpt of the platform OCL constraints.

6.1 The different system abstraction layers used in BRICS.

List of Tables

5.1 Overview on component modeling primitives in different robot software systems.
5.2 Protocol stack view of some existing technology frameworks.
5.3 Latencies measured for ZeroMQ and ROS for transmission of 1000 messages of size 100/2000 KB with periodicity 10/30/100 Hz via TCP from one publisher to 1/10/50 subscribers. Experiment parameters are given in the first four columns. Timings relate to the one-way latency of transmitting a single message between the publisher and any of its subscribers. The next four columns give the minimum and maximum values, plus the mean and standard deviation for each experiment. The last four columns give the same values, normalized by the number of subscribers and by a standard message size of 100 KB.

Chapter 1

Introduction

The work described in this report was performed within work package WP2 of the EU project BRICS. In this first chapter, we briefly describe the project context, then review the work program of the BRICS work package WP2 and survey the major results. Finally, we give an overview of the structure of this report.

Note: As the purpose of this document is to report on activities, concepts, and systems that have previously been specified in a specifications document, Deliverable D2.2 ??, some sections in this report may repeat text passages, or modifications thereof, from that document. This is intentional, in order to document which of the previously specified items have meanwhile been implemented, how, with which modifications, and why.

1.1 The BRICS Project Context

BRICS1 addressed the need of the robotics research community for common robotics research platforms, which support integration, evaluation, comparison, and benchmarking of research results and the promotion of best practice in robotics. In a poll in the robotics research community performed in December 2007, 95% of the participants called for such platforms. Common research platforms are beneficial for the robotics community, both academic and industrial. The academic community can save resources, which typically would have to be invested in from-scratch developments and me-too approaches. Means for simpler comparison of scientific results promote a culture of sound experimentation and comparative evaluation. Common research platforms foster more rapid technology transfer to industrial prototypes, which supports the development of new robotics systems and applications and reduces the time to market. In order to achieve these objectives, the BRICS project proposed, as one of its central research objectives, the development of a tool chain that supports rapid and flexible configuration of new robot platforms and the development of sophisticated robot applications. The objective of work package WP2 on Architecture, Middleware, and Interfaces was to provide fundamental software components using state-of-the-art software technologies and to embed these components into the tool chain.

1.2 Review of Work Package 2 Objectives and Work Program

Software development for robotics applications is a very time-consuming and error-prone process. In previous work [169] we identified hardware heterogeneity, distributed realtime computing, and software heterogeneity as major sources responsible for the complexity of software development in

1This section is a modest revision of the BRICS in a nutshell section of the project proposal.

robotics. When, as is the case for BRICS, the objective is to speed up the development process, measures should be taken to tame the difficulties arising from these problem sources. One serious consequence of the above-mentioned problem sources is that many major innovative and complex service robot applications, e.g. in many EU-funded projects, seem to be built almost completely from scratch, with little or no design or code reuse from previous projects (see e.g. [105][88][89][87][92]). Software modules with almost the same functionality often have completely different interfaces when developed in different projects (see e.g. the interfaces in ORCA [36][38], Player [77][78][172], Orocos [163][43], Miro [171][170], OPRoS [49], OpenRTM [166][13], and ROS [75][128][135]). So, in order to improve the situation of robot application development, we needed to find a way to deal with hardware heterogeneity, to handle the implications of distributed software development, to cope with software heterogeneity, and to improve software reuse. For the latter issue, WP2 planned to look especially into architectural aspects of reuse. The work plan of WP2 was structured in the following way, based on an initial assumption of a suitable structure of the software architecture:

• Task 2.1 was all about assessment of best practice and the specification of concepts for later work and led to Deliverables D2.1 and D2.2, which set the context and targets for subsequent activities.

• Task 2.2 actually fell into two subtasks: The first was to implement an Object-Oriented Device Layer (OODL), which makes it possible to abstract over the heterogeneity of vendor-specific hardware interfaces and provides object-oriented interfaces following well-defined design guidelines. The second part was about a network-transparent services layer, meant to provide interfaces to devices or software components across networks in a transparent way by wrapping the OODL interfaces with suitable middleware.

• Task 2.3 was planned in order to provide a mechanism allowing for a more coherent way to design software modules and to enable easier reuse. The consortium decided very early in the project lifetime to adopt a component-based software development approach for this target and put substantial effort into developing the BRICS Component Model (BCM).

• Task 2.4 was planned to provide a robot control architecture workbench, mainly by identifying commonly used architectural patterns in successfully implemented robot architectures.
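The OODL idea of Task 2.2 can be sketched in a few lines. The vendor API and class names below are hypothetical stand-ins, not an actual driver interface; the point is that idiosyncratic vendor conventions (C-style return codes, non-SI units) are hidden behind a clean object-oriented interface.

```python
# Minimal sketch of the OODL idea (hypothetical names, not the BRICS API):
# a vendor-specific, C-style driver is hidden behind a clean object-oriented
# device class with SI units and exception-based error handling.

class VendorLaserAPI:
    """Stand-in for a vendor driver with idiosyncratic conventions."""
    def lsr_open(self, port):
        return 0                        # 0 = OK, vendor-style return code
    def lsr_scan_mm(self):
        return [1200, 1180, 1210]       # ranges in millimetres

class RangeSensor:
    """OODL-style wrapper: uniform, object-oriented interface in SI units."""
    def __init__(self, driver, port="/dev/ttyUSB0"):
        self._driver = driver
        if driver.lsr_open(port) != 0:
            raise IOError("could not open laser on %s" % port)

    def get_ranges(self):
        """Return a scan as a list of distances in metres."""
        return [r / 1000.0 for r in self._driver.lsr_scan_mm()]

sensor = RangeSensor(VendorLaserAPI())
print(sensor.get_ranges())  # [1.2, 1.18, 1.21]
```

The same wrapping pattern applies to legacy software libraries that lack object-oriented interfaces.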

One of the architectural patterns already used early on in the project was to use a layered software architecture. A first step to tackle the problem of hardware heterogeneity was then to encapsulate the intricacies of vendor-specific interfaces to hardware devices using an object-oriented device layer (OODL). Equivalently, an OODL interface can be designed and implemented to encapsulate any legacy software (an implementation of an algorithm, a software library, or a completely independent software system) that needs to be used and integrated into the robot application, but does not follow commonly used object-oriented interface standards. This would, at least partially, address the problem of software heterogeneity. The second step foreseen was to provide network-transparent access to the objects encapsulating devices and algorithms in the OODL in a network-transparent services layer (NTSL), which allows for network-independent access to the object services by employing widely used middleware technology. This approach was more or less abandoned during the project lifetime and replaced by a different idea: to determine the distribution of software components across a computer network as late as possible during the development process. This allows developers of actual functionality to focus on this aspect and more or less forget about the distribution aspects and their implications. The distribution is now determined during the deployment phase. It is then possible to determine which component connections actually cross process boundaries

and require the use of network communication. These connections can then, in principle, be automatically configured, using different middleware and communication technologies on a case-by-case basis. In practice, the connections are usually mapped automatically to the connection technology used by a particular target robot programming framework such as ROS. Summarizing, Task 2.2 was already addressing the three issues of hardware heterogeneity, distributed systems, and software heterogeneity, albeit still on a comparatively low level.

Already during project kickoff, it became obvious that component-based and model-driven software development would become important elements in our work to realize the BRICS objectives. We introduced the component layer (CL) as an additional layer in the software architecture. Its main purpose is to further harmonize the means by which applications deal with reusable elements and to allow for easier configuration, connection, and deployment of components within an application. The BRICS Component Model was jointly developed by several partners in the consortium and provides the means to describe components on the model level.

In previous reports and papers, we stated that the architectures (of typical robot applications), even when designed for the same application, are often different to such an extent that even an expert developer needs substantial time to understand an architecture designed by someone else. At the time of writing the aforementioned deliverable, we already conjectured that it is both necessary and possible to decouple functional and software technological aspects in a much more systematic manner, and all experience and feedback gained since then confirms our conjecture.
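The late-binding idea described above can be sketched concisely. This is a hypothetical illustration, not the BRICS tooling: connections between components are declared abstractly by name, and only the deployment placement decides which connections stay in-process and which must be mapped to network middleware.

```python
# Illustrative sketch of late binding of communication (hypothetical names):
# developers declare abstract connections; the deployment step picks a
# transport per connection based on where each component is placed.

connections = [("camera", "detector"), ("detector", "planner")]

def bind_connections(connections, placement):
    """Choose a transport for each connection from the deployment placement."""
    bound = {}
    for src, dst in connections:
        if placement[src] == placement[dst]:
            bound[(src, dst)] = "in-process queue"
        else:
            bound[(src, dst)] = "network (e.g. middleware topic)"
    return bound

# Deployment decision, made last: detector and planner share a process.
placement = {"camera": "pc1/proc1", "detector": "pc2/proc1", "planner": "pc2/proc1"}
print(bind_connections(connections, placement))
```

Changing the placement dictionary, and nothing else, is enough to redistribute the application, which is exactly what makes late binding attractive.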
One of the major reasons for the often confusing description and discussion of architectures is that different aspects of architectures are addressed and not cleanly separated. In our work, we could meanwhile establish a clear connection between the different architecture aspects and respective phases in the robot application development process. We identified six different aspects of architecture, which help to clarify the discussion of architectures. We also defined and described a comprehensive development process for building robot applications, and each of the architecture aspects has a clear role in the development process.

During BRICS, we looked intensively into the architecture problem and gained a lot of insights that will hopefully help the robotics community to lead more coherent and structured discussions on robot architectures. One insight, gained already during proposal writing, is that "standardized" architectures are currently an unrealistic dream for robotics applications that combine state-of-the-art actuation, sensing, processing, and cognition. In this situation, it does not make sense to design a robot control architecture workbench that tries to provide such standardized architectures. Instead, we proposed to identify both common elements and repeating patterns in robot architectures. Our work then concentrated on providing the mechanisms necessary to provide the common elements in generic, reusable ways, and on developing languages to describe the repeating patterns. This work is much more general and extensive than what was initially foreseen, and we do not consider it completed yet; any claim on the completeness of the identified patterns, or even on the coverage of certain areas, would be premature due to the number and size of existing architectures that need to be investigated and the limited resources we had available.
Also, the quality of tool support critically depends on feedback from practical use, and it is clear that there is still a lot of room for improvement. However, some solid foundations have been laid.


1.3 Summary of Major Results

The major contributions of WP2 over the project lifetime can be summarized by the following list:

• Comprehensive survey of relevant best practice (see D-2.1)

• Specification of architecture and interfaces for robot control architecture workbench and BROCRE, the BRICS Open Code Repository (see D-2.2)

• Identification and description of relevant architecture aspects (see Chapter 2)

• Definition and description of BRICS RAP, a coherent, holistic application development process for robotics, and relating the different architecture aspects to steps in the development process (see Chapter 3)

• Identification of architectural patterns commonly used in robotics and their description in a suitable way (see Chapter 4)

• Pursuance of a layered software architecture approach, where the lower three layers address the previously identified problems of hardware heterogeneity, distribution, and software heterogeneity in the following manner:

– The object-oriented device layer (OODL) hides vendor-specific interfaces of devices and non-object-oriented interfaces of legacy software behind a clean layer of object-oriented classes.

– The network-transparent services layer (NTSL) provides network transparency by wrapping OODL objects with middleware services. This is performed automatically during system deployment.

– The component layer harmonizes the software elements further by providing coherent and well-defined means for structuring components (5C principle), configuring components, and deploying components.

• Definition and provision of concepts, methods, and tools for modeling robot architectures, especially for the BRICS Component Model

• some additional items to be added
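The role of the component layer can be illustrated with a minimal sketch. The names below (Port, Component, Gain) are purely illustrative and not the actual BRICS Component Model API; the point is the separation of configuration, computation, and communication (ports), which is what makes components uniform to configure, connect, and deploy.

```python
# Hypothetical, minimal component sketch: configuration, computation, and
# communication (ports) are kept separate concerns, in the spirit of the
# component-layer idea. Not the actual BRICS Component Model API.

class Port:
    """A data connection point; here just a one-slot buffer."""
    def __init__(self):
        self.value = None
    def write(self, v):
        self.value = v
    def read(self):
        return self.value

class Component:
    """Base class: named component with configuration and port maps."""
    def __init__(self, name, **config):
        self.name = name
        self.config = config            # configuration, separate from code
        self.inputs = {}
        self.outputs = {}

class Gain(Component):
    """Computation: multiply the input by a configured gain."""
    def __init__(self, name, gain=1.0):
        super().__init__(name, gain=gain)
        self.inputs["in"] = Port()
        self.outputs["out"] = Port()

    def step(self):
        v = self.inputs["in"].read()
        self.outputs["out"].write(v * self.config["gain"])

g = Gain("amplifier", gain=2.0)
g.inputs["in"].write(3.0)
g.step()
print(g.outputs["out"].read())  # 6.0
```

Because configuration lives in data rather than in code, a tool can change the gain, rewire the ports, or relocate the component without touching the computation.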

1.4 Overview on the Report

The remainder of the report is structured as follows: Chapter 2 presents a hierarchical structure of architecture aspects considered relevant for BRICS. Chapter 3 reports on the latest version of BRICS RAP, the BRICS Robot Application development Process, and its relationships with the architecture aspects. In Chapter 4 we discuss architectural patterns. Chapter 5 covers work performed towards modeling architectures, while Chapter 6 reports on work that helps to actually implement architectures. Chapter ?? summarizes and concludes.

Chapter 2

Aspects of Robot Architectures

2.1 On Architectures in Robotics

Various research activities in the past few years1 have made evident the fact that the robotics community currently cannot make any widely agreed upon statements on the “architecture” topic. While the architecture of a robot had often been an integral part of a scientific publication until the late 80s, the strong focus on methods in the last two decades has almost wiped out the architectural debate. Whether it is a result of this neglect of architecture or not, it remains a fact that practically every robot application developed in the past 20 years adopts a different architecture. This might still be acceptable if each architecture had been designed for a significantly different application. This is unfortunately not the case. This situation makes application development in robotics time-consuming and expensive. The architecture debate has not been helped by the considerable confusion about the notion of architecture (see e.g. [165][11][16][17][22][31][35][49][143][51][60][68][70][76][84][85][88] [95][98][104][105][112][130][133][146][147][155][168][174][179][180]). In order to avoid such confusion within BRICS, we defined six different architectural aspects (see Section 2.2) that we could identify in typical non-trivial robot application systems. An essential distinction that will be made is between a functional architecture and a software component architecture. Control is a central aspect on both levels. In practice, these different aspects are often not separated. A supposedly “simpler” architecture, however, often makes things actually more complex and easily leads to confusion. We believe that making this distinction is absolutely necessary in order to make progress and speed up robot application development. The situation can be compared e.g. with GUI programming. When the first GUI programming environments arose, they often differed significantly, and programmers applied widely different software designs to build applications.
Many of those were riddled with stability problems and frequent failures. Nowadays, there is a more or less established set of design rules for GUI-based applications — not only for the layout of graphical user interface elements, but also for the program design behind it — which makes the design of such GUI-based applications much more coherent. If these design rules are followed, a developer well-versed in these techniques can quickly understand an application developed by others. Thus, these design rules are independent of the functionality of the target software system. The robotics community should aim at a similar level of functional independence, i.e. the software architecture, its implementation in source code, and the organization and management of this source code need to be much more independent of the functionality of the robot application, and need to adhere much more to commonly agreed-upon, (domain-specific) design guidelines.

1See e.g. various workshops organized by the RoSta project [165] or the SDIR workshop series by Davide Brugali.


2.2 Architectural Aspects

[Figure 2.1 shows the hierarchy of architecture aspects: the system architecture comprises a software architecture (functional, component, and runtime architecture) and a hardware architecture (computational, electrical, and mechanical architecture). The aspects are related by "implemented by", "mapped to", "executed by", "controls", and "actuates".]

Figure 2.1: The architecture aspects used in BRICS.

For a complex service robot, we distinguish six aspects of architecture (see Figure 2.1). The first three aspects are related to hardware aspects, and together can be considered as defining the hardware architecture:

• The mechanical architecture aspect describes the robot system in all its mechanical aspects. This should include CAD models of each component, how and where the components are connected to each other during assembly, and various information about the mechanical components or the overall system that may be of relevance for software development. For example, the color of system parts is of interest for detecting when robot parts get into the field of view of the robot’s own perception system. The weight of components and of the overall system, and frictional coefficients for the wheels, may be of interest when computing the dynamics. And coordinate transformations, for example for data from laser range finders into a robot-centric coordinate system, could be automatically derived when knowing precisely how the sensor system is mounted on the robot. Needless to say, the information of the mechanical architecture aspect is usually part of the documentation provided for technicians who are to perform mechanical maintenance operations on the robot.

• The electrical architecture aspect provides information on all electrical issues of the robot system. It must include all electromechanical (e.g. actuators), electrical (batteries, switches), electronic (sensors, circuit boards), and computational (microcontrollers, embedded PCs, laptops, etc.) elements of the system, and all the wiring (power supply, buses, network lines) between them. This information should be acquired along the way during the actual configuration of the hardware platform. Based on this information for a particular hardware configuration, it is possible to decide whether or not an additional device can be added (e.g. if a sensor needs a USB 2.0 connection, then a free USB 2.0 slot must be available), or it can help to debug difficult problems.

• The computational architecture aspect provides information on the computational devices, e.g. which CPUs and GPUs they feature, how much RAM is available, which operating systems they run, and how the computational devices are networked together. For a small robot with a single embedded PC running all application software this aspect may seem trivial. But for a robot featuring several computational devices, this aspect


should contain all information of relevance for mapping a large distributed software system onto this computational architecture and getting it to run smoothly.
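As a small illustration of how information from the mechanical architecture aspect feeds into software, the following sketch derives a robot-centric coordinate from a sensor-frame measurement, given the sensor's mounting pose. The mounting values are illustrative, not taken from any actual robot.

```python
# Sketch: transforming a 2D point from the laser scanner's frame into the
# robot-centric frame, using the mounting pose (offset and yaw) that the
# mechanical architecture aspect would provide. Values are illustrative.

import math

def sensor_to_robot(point, mount_xy, mount_yaw):
    """Apply the rigid-body transform robot_T_sensor to a sensor-frame point."""
    px, py = point
    c, s = math.cos(mount_yaw), math.sin(mount_yaw)
    return (mount_xy[0] + c * px - s * py,
            mount_xy[1] + s * px + c * py)

# Laser mounted 0.2 m ahead of the robot centre, rotated 90 degrees.
p_robot = sensor_to_robot((1.0, 0.0), (0.2, 0.0), math.pi / 2)
print(p_robot)  # approximately (0.2, 1.0)
```

Deriving such transforms automatically from precise mounting information, rather than hand-coding them, is exactly the kind of reuse the mechanical architecture aspect enables.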

The remaining three aspects are related to software and can be jointly viewed as the software architecture:

• The functional architecture aspect focuses on functionality rather than software issues. It should identify major functional components, like speech recognition, manipulator control, path planning, etc., and how they interact in order to solve certain tasks.

• The component architecture aspect concerns all aspects of the actual software implementation of the functional architecture, especially software modules and their interaction, the interfaces of the software modules, and the relevant data structures. As we use component-based programming, components serve as the predominant concept for modularization. Composite components allow for hierarchical composition of more complex components.

• Finally, the runtime architecture aspect maps the software architecture onto a particular computational architecture, mainly by mapping software components onto processes and threads, and by mapping processes and threads onto the computational devices available.
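A runtime architecture in the above sense can be captured by a very small mapping model. The sketch below is illustrative only; the process, component, and device names are invented examples.

```python
# Illustrative sketch of a runtime-architecture mapping: software components
# are grouped into processes, and processes are assigned to computational
# devices. All names are hypothetical examples, not BRICS artifacts.

runtime_architecture = {
    # process name -> (host device, components running in that process)
    "perception": ("embedded_pc_1", ["camera_driver", "object_recognition"]),
    "navigation": ("embedded_pc_2", ["slam", "path_planner"]),
    "control":    ("microcontroller", ["motor_controller"]),
}

def components_on(device):
    """All components that end up on a given computational device."""
    return [c for host, comps in runtime_architecture.values()
              if host == device for c in comps]

print(components_on("embedded_pc_1"))  # ['camera_driver', 'object_recognition']
```

Even a model this small makes the runtime aspect explicit enough to answer questions such as which components compete for the same device.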

More details are provided in the subsequent sections. The attentive reader may wonder why the notion of control does not appear in the above discussion; after all, “robot control architecture” is a term frequently used in robotics papers. The main reason is that control is an integral part of each of the three software-related architecture aspects.


Chapter 3

Architecture Aspects and the Robot Application Development Process

The Robot Application Development Process in BRICS (BRICS RAP or simply RAP) is a holistic process model for developing robotics applications in both academic and industrial settings. It combines ideas from traditional software engineering [24][159][132][127], agile software development [148][117][54][153][56], model-based engineering [104][144][158][120][27][125], industrial systems engineering [108][30][116][178][97], and industrial project management [?], [?], [?]. BRICS RAP foresees eight different phases (see Figure 3.1). Each of the phases requires several steps to complete.

Scenario Building → Functional Design → Platform Building → Capability Building → System Deployment → System Benchmarking → Product Deployment → Product Maintenance

Figure 3.1: Phases of the BRICS Robot Application Development Process

A survey of phases and steps is given in Figure 3.2, and both phases and steps are explained in the following subsections. In order to fully understand the holistic nature of BRICS RAP, we assume that a customer, who is interested in acquiring a service robot application, either for an industrial or a residential setting, is meeting a representative of a company offering such products. This company will be referred to as the system integrator. The system integrator may have a small set of generic products in its product portfolio, which will usually be customized to meet customer needs. The process is similar to how one orders a car: the customer has certain choices of drive systems and optional equipment to be configured into the system, and the overall software controlling the application needs to be adapted to reflect the choices of the configuration process and extended to meet the functional needs of the customer. Researchers and developers are often astonished by the detail and extent of the process, for several reasons. One is that most people in robotics so far have a very narrow perspective on the problem. Many students, for example, initially see only the developer's perspective and neglect or are unaware of the interests of other stakeholders, such as customers, end users, operators, component suppliers, etc. They have not yet gained experience of the needs of a professionally executed development project in an industrial setting. Another factor is that it is not yet common in service robotics to look at the complete life cycle of a service robot application, because there are still few successful examples. However, interest in our process model is growing, and has been expressed especially by people working in industrial settings.


Phases: Scenario Building, Functional Design, Platform Building, Capability Building, System Deployment, System Benchmarking, Product Deployment, Product Maintenance. Each phase comprises several steps, shown in the original figure as a grid of steps per phase.

Figure 3.2: Overview of phases and steps of the BRICS Robot Application Development Process

3.1 Phase 1: Scenario Building

When customer and system integrator meet at the beginning of a project, the most important issue is to make sure the system integrator gets a good understanding of the problem. This is in complete analogy to starting a (non-trivial) software project. As is the case in classical software engineering, the customer often does not really know what he wants. In robotics in particular, customers often have naive assumptions about what is possible with the current state of the art and what is not, and it is especially difficult to judge where precisely the limits are. These difficulties are reflected by the steps in this phase (see Figure 3.3). After defining some initial scenarios, there are two steps to remedy the problem. Also, there is a step to explicitly define customer acceptance tests at this early stage of the project. Together, these steps constitute development steps elsewhere known as requirements engineering [160][181][25][10][129], use case modeling [53][20][138], or user story definition [55].

Phase 1: Scenario Building

Steps

Scenario Definition → Scenario Generalization → Scenario Simulation Modeling → Customer Acceptance Test Definition

Figure 3.3: The scenario building phase of the BRICS robot application development process.


3.1.1 Step 1-1: Scenario Definition

The typical scenarios for the target application have to be described. The description should preferably include all information that is potentially relevant for the design and implementation of the target robot application, including, but not limited to, the environment in which the application is supposed to operate, the relevant objects, the relevant subjects and their behavior, and the dynamics of the overall environment. Good examples of scenario definitions are provided by the rule books for the RoboCup@Work and RoboCup@Home competitions. Developing more formal descriptions serving as models is not difficult. In particular, the simulation models already built by some teams, e.g. for the Gazebo simulator, provide good guidance in this direction.

3.1.2 Step 1-2: Scenario Generalization

In order to avoid the development of brittle applications [52][121], which break down as soon as one of the usually tacit underlying assumptions is violated, this step focuses on the generalization of the scenarios. This means that the variability of scenario elements (environments, objects, subjects, behaviors, dynamics) must be discussed and described. Note that the variability rooted in the scenarios may induce variability in tasks and functionality in later steps, and subsequently induce the necessity for variability in the product, leading to product lines and the need for variability modeling.1

3.1.3 Step 1-3: Scenario Simulation Modeling

A simulation model for the scenarios should be built here. Simulation is considered a helpful instrument for development [83]. The simulation modeling step can be done simultaneously with steps 1 and 2, especially if an interactive simulation model building tool is available which allows the immediate visualization of the environment model under development. Such a visualization tool can thereby be of great help in the scenario building and generalization process.

3.1.4 Step 1-4: Customer Acceptance Test Definition

The range of possible combinations of scenarios and services expected of the target robot application can quickly grow to dimensions prohibitive for exhaustive testing. The definition of customer acceptance tests [29][119] allows the customer to express the relative importance of different end-user functionalities. This information gives developers directions for devoting attention to particular system functionalities. Aside from the acceptance tests made available to developers, the customer should define for each test one or more variations, which should not be made available to developers, in order to test the robustness and variability of the developed solutions.

3.1.5 Notes on Phase 1

Agile Development Aspects: The steps in this phase are expected to be executed repeatedly. Every task which the final robot application is expected to perform should be reflected in at least one scenario (user story). During this repetitive process, the steps in this phase do not necessarily need to be executed in each iteration, or in the order presented here. For example, if a scenario involves grasping an object, and the range of objects that the robot should be able to grasp has already been elicited in the scenario generalization step in a previous iteration, it is not necessary to repeat it. However, requirements are acquired from people, and people are not always good at thinking of and listing all

1The latter issue has been addressed particularly well by our partner U Bergamo. See Deliverable [?].


requirements right away. Looking at some problem aspect again, possibly from a different angle, often reveals extra requirements and helps to gain a more complete picture. Therefore, making it an explicit exercise to consider generalization for each new scenario may help to obtain good coverage of the overall problem at an early stage.

Model-Driven Development Aspects: Three possible models can be identified:

1. a model for a scenario description,

2. a model for a simulation model, and

3. a model for a customer acceptance test.

The scenario description model should foresee a structure for a scenario description, including optional elements, and have explicit indicators for what needs to be looked at during the scenario generalization step.
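As a minimal sketch, such a scenario description model could look as follows. The field names and the variability flag are assumptions made for illustration; the point is only that optional elements and generalization indicators become explicit parts of the model.

```python
# Hypothetical scenario description model: all field names are illustrative.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ScenarioElement:
    name: str
    variable: bool = False          # indicator for the scenario generalization step
    variations: List[str] = field(default_factory=list)

@dataclass
class Scenario:
    name: str
    environment: ScenarioElement
    objects: List[ScenarioElement] = field(default_factory=list)
    subjects: List[ScenarioElement] = field(default_factory=list)
    dynamics: Optional[str] = None  # an optional element of the description

def generalization_candidates(s: Scenario):
    """Elements flagged as variable but not yet given any variations."""
    elems = [s.environment] + s.objects + s.subjects
    return [e.name for e in elems if e.variable and not e.variations]

fetch = Scenario(
    name="fetch_and_carry",
    environment=ScenarioElement("apartment", variable=True),
    objects=[ScenarioElement("cup", variable=True, variations=["mug", "glass"])],
)
print(generalization_candidates(fetch))  # ['apartment'] still needs variations
```

A tool supporting the generalization step could use such indicators to remind developers which scenario elements have not yet been generalized.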

3.2 Phase 2: Functional Design

The functional design phase consists of the various analysis and design steps necessary to specify the hardware and functionality requirements (see Figure 3.4). Coming up with the latter usually requires several iterations of functional analysis, decomposition, and design, until functionalities are sufficiently well detailed, specified, and validated that software components can be designed for them. The functional decomposition performed here (top down) can usually serve quite well as a master plan for the hierarchical composition of components (bottom up).

Phase 2: Functional Design

Steps

Hardware Requirements → Functional Requirements → Functional Decomposition → Functional Validation

Figure 3.4: The functional design phase of the BRICS robot application development process.

3.2.1 Step 2-1: Hardware Requirements

The goal of this step is to derive hardware requirements that follow directly from the scenario descriptions. For example, if the scenario task requires the robot to manipulate an object, a manipulator and a grasping device will be necessary. The hardware requirements should be specified in sufficiently general terms that they still leave a robot hardware developer sufficient freedom to consider alternative design solutions, but also sufficiently specifically to constrain the design space to promising solution candidates.


3.2.2 Step 2-2: Functional Requirements

Similar to the previous step, the goal of the functional requirements step is to derive functional requirements that follow directly from the scenario. For example, if the scenario task requires the robot to manipulate an object, it needs to be able to perceive the object, to determine the object position with respect to some coordinate system, to represent such a position in a suitable way, to use the object position to compute a trajectory for the manipulator/gripper bringing it into a state that allows it to grasp the object, etc. Note that despite presenting the hardware requirements step first in the functional design phase, we do not impose a particular order between these two steps. A hardware requirement usually imposes some matching functional requirements, but not all hardware requirements may be directly obvious. A hardware requirement may become evident only after some functional requirement has been inferred from the scenario description, and that functional requirement imposes certain hardware requirements.

3.2.3 Step 2-3: Functional Decomposition

Deriving high-level functionality from scenario task requirements is not sufficient for software development in robotics. The functionality must be decomposed into suitable sub-functionalities. The decomposition includes information about how these sub-functionalities interact in order to jointly produce the required high-level functionality. The decomposition process must be repeated until functionalities are identified for which methods are known (or new methods can be devised) that implement these functionalities. Note that not all decomposition steps may be done by a single developer or team of developers; some decomposition steps may be deferred in order to include domain specialists later on. However, delaying such design steps also bears risks and may, for example, induce integration problems.

3.2.4 Step 2-4: Functional Validation

The functional validation step is important in order to ensure that the functional design of the target system is as complete as possible, i.e. a careful "mind simulation" of the task in the scenario should neither reveal any additional hardware or functional requirements nor indicate the necessity to further decompose any functionalities in order to make them implementable.

3.2.5 Notes on Phase 2

Agile Development Aspects: If scenario building is performed in an agile manner, iteratively specifying scenario tasks and variations thereof, this agile approach can be applied to functional design as well. Whenever a new scenario variant is produced, the respective functional designs have to be checked as to whether they already account for the new scenario variation or whether they warrant a modification of the functional design, e.g. by adding new functional components or modifying their interaction. The developers should, however, take care not to stick with initial designs that eventually turn out not to carry over to the demands imposed by growing scenario requirements.

Model-Driven Development Aspects: From an MDE perspective, the whole functional design can be considered as producing a model of the target application. It will usually not be possible to directly generate code from such models, but they are a necessary step towards developing models from which code can be generated. UML2 should provide all necessary means to describe functional designs.


3.3 Phase 3: Platform Building

The platform building phase foresees the steps necessary to prepare the actual software development of the target application (see Figure 3.5). The hardware platform needs to be configured, from which the software platform can be derived, which provides the necessary device drivers and various utilities. Furthermore, a robot emulation model is derived to complement the simulation model defined during scenario building. The purpose of this step is to allow software development independently of the actual availability of the hardware platform, which due to manufacturing scheduling constraints may often take several months to build. As in all phases, testing procedures are foreseen as well.

Phase 3: Platform Building

Steps

Hardware Platform Configuration → Software Platform Configuration → Robot Emulation Modeling → System Component Testing

Figure 3.5: The platform building phase of the BRICS robot application development process.

3.3.1 Step 3-1: Hardware Platform Configuration

Once a good understanding exists of which tasks the target robot application needs to solve and what the environment and task execution context look like, a hardware platform can and should be configured. Currently, it seems unlikely that much sensible software development can take place without knowing what the target platform will be. For the configuration process itself, several approaches are possible. The most viable within the BRICS project seems to be to assume a small set of generic robot hardware platforms [162], for which a limited number of components can be configured. For an industrial mobile manipulator from KUKA, for example, one might choose between two base variants consisting of the KUKA OmniRob mobile platform [111] with either one or two KUKA LWR lightweight arms [96][110] mounted on it. Other configuration choices may pertain to the computational hardware to be integrated, and the number of sensors (laser range finders, cameras, etc.) to be mounted and their placement on the robot. At least the vast majority of hardware requirements should be inferable from the information acquired in the scenario building phase. It should be no problem, however, to modify the hardware configuration later on, e.g. if the need for an additional sensor arises or an initially selected hardware component needs to be replaced by another one.

3.3.2 Step 3-2: Software Platform Configuration

The software platform is the complete software foundation on which the actual robot application software will be built. The software platform consists of

• the operating systems and/or any required firmware for the computational devices configured into the hardware (embedded PCs, laptops, microcontroller boards, etc.),


• the device drivers for all relevant hardware devices, including those for all sensor and actuator devices,

• configuration information that can be derived from the hardware configuration, e.g. coordinate translations from a camera or laser range finder coordinate system to the coordinate system of the mobile base,

• utilities and tools for configuring, calibrating, operating, monitoring, logging, and analysing all hardware components,

• software technology components2 required for operating the robot application software, such as communication middleware, user interface toolkits, or libraries supporting the integration and use of particular hardware devices,

• all the tools and facilities that come with or are necessary for an integrated development environment, including editors, compilers, debuggers, profilers, etc., and providing support for both model-driven development and agile development,

• libraries of best practice algorithms that could potentially be reused in building the appli- cation, and

• libraries of reusable software components of all of the above, usually the result of a componentification process.

The suite of software packages that will make up the software platform is determined by information from three different sources:

1. Information directly inferred from the hardware configuration. Example: device drivers for sensor components.

2. Information representing customer choices. Example: the operating systems to be run on the computational hardware, or a particular environment for building the user interface.

3. Information following from application developer decisions. Example: a middleware pack- age selected for distributed programming support.
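The three sources can be combined mechanically, which is one reason this step lends itself to tool support. The following sketch is illustrative only; the package names, device names, and mapping table are invented placeholders, not actual BRICS artifacts.

```python
# Hypothetical sketch: assembling a software-platform package list from the
# three information sources named above. All names are invented placeholders.

# Source 1: packages inferred directly from the hardware configuration
DRIVER_PACKAGES = {
    "hokuyo_laser": ["hokuyo-driver"],
    "firewire_camera": ["libdc1394", "camera-driver"],
}

def software_platform(hardware_config, customer_choices, developer_choices):
    packages = []
    for device in hardware_config:            # source 1: hardware-implied drivers
        packages += DRIVER_PACKAGES.get(device, [])
    packages += customer_choices              # source 2: e.g. OS, UI toolkit
    packages += developer_choices             # source 3: e.g. middleware
    return sorted(set(packages))              # deduplicated, stable order

plat = software_platform(
    hardware_config=["hokuyo_laser", "firewire_camera"],
    customer_choices=["ubuntu-lts"],
    developer_choices=["ros-middleware"],
)
print(plat)
```

In a real tool, the driver table itself would be generated from the hardware model rather than written by hand, in keeping with the model-driven view of this phase.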

3.3.3 Step 3-3: Robot Emulation Model Definition

Simulation plays an important role in BRICS RAP. It not only allows for software development without actual access to the robot hardware (e.g. because the hardware requires time to build, or access to the hardware is restricted or constrained due to resource limitations), but also provides a means to safely test a design in simulated extreme situations or to assess new designs with unknown safety properties. The simulation model built in the scenario building phase needs to be complemented by a robot emulation model to allow for such simulation runs. We prefer the term robot emulation model over robot simulation model to emphasize that the simulation should allow testing of the targeted robot application software without changes, i.e. only the interfaces to hardware components should be replaced by emulated versions of these components [45][99][26].
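The interface-swap idea can be illustrated as follows: the application code depends only on an abstract device interface, so the emulated version can stand in for the real driver without any change to the application. All class names in the sketch are illustrative assumptions.

```python
# Sketch of the emulation idea: only the hardware interface is replaced,
# the application code is unchanged. Names are hypothetical examples.

from abc import ABC, abstractmethod

class RangeFinder(ABC):
    @abstractmethod
    def scan(self) -> list: ...

class RealRangeFinder(RangeFinder):
    def scan(self):
        raise RuntimeError("real hardware is not available in this sketch")

class EmulatedRangeFinder(RangeFinder):
    """Returns ranges computed from the simulation model instead of hardware."""
    def scan(self):
        return [1.0, 1.2, 0.8]   # canned readings standing in for the simulator

def nearest_obstacle(sensor: RangeFinder) -> float:
    # Application code: identical whether the sensor is real or emulated.
    return min(sensor.scan())

print(nearest_obstacle(EmulatedRangeFinder()))  # 0.8
```

At deployment time, `EmulatedRangeFinder` is simply replaced by `RealRangeFinder`; the application code above stays untouched.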

3.3.4 Step 3-4: System Component Testing

At the end of the platform building phase it should be possible to perform tests on all hardware devices, software components, and development tools that were selected [154][73][33].

2Component is used here in an informal sense, denoting a piece of software that becomes part of the robot application or the environment required to execute it.


3.3.5 Notes on Phase 3

Agile Development Aspects: The platform building phase can be executed in a very agile manner, especially if all steps are supported by appropriate tools that allow fast iterations of the process, performing only small pieces and steps in each iteration. After selecting a mobile base and some default computational device on it in Step 3-1, an initial software platform and an emulation model can be configured and generated in Steps 3-2 and 3-3. Some mobility tests can be defined and executed using the emulated robot in the simulated environment and a manual remote control tool. The next iteration may extend the hardware configuration by adding some sensors, e.g. two laser range finders. Different options for mounting them could be explored by cycling through Steps 3-1 to 3-4 repeatedly. Each iteration is performed with a clear goal in mind (exploring a design option) and yields clear results that are both useful and required by successive steps. The idea of decomposing the development into many small, decoupled steps while always keeping things integrated is one followed closely by the agile development community. The strict test orientation ensures that previously achieved goals and features remain functional despite the many small changes made to the system.

Model-Driven Development Aspects: The platform building phase provides an excellent example of the whole idea of model-driven development: both the software platform and the robot emulation model are generated from a hardware model which is specified in Step 3-1. For the platform building phase, the hardware model defined in Step 3-1 represents the platform model of an MDD process, and the software platform generated in Step 3-2 represents the platform-specific model of an MDD process.

After the three initial phases, a point is now reached where the actual core software development activity can start. The robot hardware is determined. Even if it is not available yet, we can work with its emulator. All the hardware-related software components are there and tested, as well as a lot of utilities and tools needed for software development. In retrospect, many development projects in robotics seem to have assumed that they start at this point, and that everything their project required up to that point is merely a matter of acquiring a robot platform and installing some development environment. Projects that plan to develop new robot hardware usually tend to underestimate not only the effort for hardware development itself, but especially the effort for providing all the tools and utilities needed for an adequate development environment for software developers. If such projects are funded only for a limited period of time, it is almost inevitable that they eventually fall short of achieving their initial goals, because the delays caused by either the late arrival or continued unavailability of the development platform (as defined here) can never be compensated for later on. Even projects which choose to select an almost complete mobile platform from a vendor and “only” add a few extra sensors or a mobile arm often suffer from this problem, as the software implications of these “small additions” to the hardware platform, lying at the borderline between the hardware platform-oriented developers and software functionality-oriented developers, are easy to underestimate. Even projects which initially opt for a complete vendor-based solution consisting of a mobile platform and associated development environments may run into the problem that the developers find the software environment inadequate and too limited for their purposes, and end up investing a lot of effort into developing a “suitable” development environment.
In all of the above scenarios, a lot of re-invention of the wheel is happening; too much in the opinion of many experts. BRICS RAP also aims to remedy this problem, mainly by accepting the fact that an application development process can be made much more efficient if the right environment and tools are provided to the developers, including support for re-use of software in the form of readily accessible libraries.


3.4 Phase 4: Capability Building

After completing phases 1 to 3, the stage is set for the actual core software development, which will happen in phases 4 and 5. The BRICS project has decided to adopt the idea of component-based software development [152][136][139][81][21][91]. It seems to best foster software re-use [113], and supports both ideas from model-driven development [104][144][158][120][27][125] and agile software development [148][117][54][153][56].

Phase 4: Capability Building

Steps

(Composite) Component Construction → Deployment Constraints Specification → Skills and Capabilities Testing → Content Generation

Figure 3.6: The capability building phase of the BRICS robot application development process.

The focus of Phase 4 is on building each of the functional capabilities that were identified and included in the functional design during the functional design phase (see Figure 3.6). As essential major steps, we aim at making hardware devices and algorithms available as re-usable components, as well as composing components to create particular capabilities such as recognizing people, perceiving objects, understanding speech, producing speech, exploring and mapping the environment, planning paths, executing trajectories, grasping objects, making gestures, etc. The focus of Phase 5 is then on composing complete systems/applications from such component capabilities, ensuring that they work together correctly and interact in the desired ways, and deploying them on a particular hardware platform. As components play a central role in both phases, there seems to be significant overlap between them. Although this may be true in terms of the technologies and tool sets used, the difference can be characterized as follows:

• The capability building phase concentrates on developing isolated functionality. It takes a component perspective and provides a piece of software that is designed for integration into larger systems. If the capability component itself is composed of components, these components usually interact in relatively simple ways. Usually, there are no or few resource conflicts at the component level. Any kind of user interaction is considered in an isolated way without assuming a particular context.

• The system deployment phase focuses on developing integrated, system-level functionality. It takes a system perspective and provides a complete system or application, which can run standalone and does not depend upon integration into a larger system. The system deployment phase will usually have to integrate a multitude of capabilities, and this integration may produce many and serious resource conflicts. Resolving these may imply non-trivial modifications to the components or the introduction of specific components for conflict resolution. Particular care must be taken to integrate and consolidate user interaction such that the overall system behavior is understandable and predictable for the human users and operators.


3.4.1 Step 4-1: (Composite) Component Construction

Component-based software development works with components. We view component-based software development as a means to structure and manage large, distributed applications, i.e. predominantly for programming-in-the-large, and object-oriented programming as a means for programming-in-the-small, which should be applied for the implementation of the components themselves. Non-object-oriented programming is acceptable only for legacy code and should be avoided as much as possible. This view basically defines the activities to be done in Step 4-1: to turn whatever initial ingredient we get (a device driver, an implementation of an algorithm) into a component. For a hardware device, these activities include

1. installing, configuring, and running whatever vendor-supplied software comes with the device, especially the respective device drivers, any libraries that may be supplied for processing sensor data produced by the device, or any controller that may be supplied for controlling an actuating device,

2. selecting or defining interfaces for the device and data structures used by these interfaces,

3. wrapping the vendor-supplied code into usually one object-oriented class (using the interfaces and data structures previously defined),

4. providing the interfaces as network-transparent services by combining the object-oriented class with middleware functionality, and

5. encapsulating the network-transparent services into a re-usable component.
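Steps 2 to 5 above can be compressed into a small sketch for a hypothetical camera device. The in-process registry below stands in for real communication middleware, and all names (vendor function, class names, service key) are invented for illustration.

```python
# Hypothetical componentification sketch for a camera device. The registry is
# an in-process stand-in for communication middleware; all names are invented.

def vendor_grab_frame():            # stand-in for a vendor-supplied driver call
    return {"width": 640, "height": 480, "data": b"\x00"}

class CameraDevice:                 # step 3: one object-oriented wrapper class
    def get_image(self):            # step 2: the interface we defined
        return vendor_grab_frame()

SERVICE_REGISTRY = {}               # step 4: middleware stand-in

def provide(name, callable_obj):
    SERVICE_REGISTRY[name] = callable_obj   # would publish via middleware

def lookup(name):
    return SERVICE_REGISTRY[name]           # would return a remote proxy

class CameraComponent:              # step 5: the re-usable component bundles
    def __init__(self):             # the wrapper with its service registration
        provide("camera.get_image", CameraDevice().get_image)

CameraComponent()
image = lookup("camera.get_image")()
print(image["width"])  # 640
```

With real middleware, `provide` and `lookup` would be replaced by the middleware's publication and proxy mechanisms, while the wrapper class remains unchanged; this is precisely why the steps are kept separate.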

For an algorithmic component, usually the implementation of a particular functionality, like an object recognition algorithm, these activities include

1. analyzing and understanding the legacy code,

2. refactoring the legacy code or re-implementing the algorithm in an object-oriented fashion, as a set of classes,

3. harmonizing class interfaces and the data structures used by them, and assimilating them into the abstraction hierarchy,

4. providing the interfaces as network-transparent services by combining the object-oriented classes with middleware functionality, and

5. encapsulating the network-transparent services into re-usable components.

Developers may, of course, also develop components completely from scratch. In this case, there is no legacy code. Developers can directly implement the algorithm using object-oriented classes, interfaces, and data structures, turn these into network-transparent services, and turn these into components. Once the basic functionalities needed for some higher-level functionality are available as components, they can be combined and composed into composite components to implement this higher-level functionality. That is, while we decompose functionality top-down, we compose actual software systems bottom-up in order to provide increasingly more sophisticated skills and capabilities for the robot. In this step composite components are needed [177][86][140][103] [62][175][182][23][15][19][67][102]. We take a recursive view of composite components: Any composite component can be used as a component for building a more complex composite

component. As we have outlined before, none of these components is supposed to be executable as a standalone application, and each may demand a certain context to be runnable. Remark: The computational aspects incorporated by a component are manifold, and the knowledge and background required for them is deep. Few programmers master all of the following in an equally competent manner: low-level systems or embedded programming; the design and efficient implementation of algorithms from specific domains like computer vision, non-linear control, and task planning in a state-of-the-art object-oriented programming language; the appropriate use of communication middleware; and the implications of programming distributed systems. The componentification process is therefore intentionally structured into several steps, each of which requires different skills and background knowledge, in order to make the process more manageable. We get more steps, but a smaller cognitive load in each step.
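The recursive view of composite components can be illustrated with a minimal sketch in the style of the classic Composite pattern; the component names are purely illustrative:

```python
"""Sketch of the recursive view of composite components: any composite
can itself serve as a component in a larger composite."""
class Component:
    def __init__(self, name):
        self.name = name

    def flatten(self):
        # A primitive component contains only itself.
        return [self.name]

class CompositeComponent(Component):
    def __init__(self, name, children):
        super().__init__(name)
        self.children = children

    def flatten(self):
        # A composite lists itself followed by all nested components.
        names = [self.name]
        for child in self.children:
            names.extend(child.flatten())
        return names

# A composite built from primitives is reused inside a larger composite.
gripper = CompositeComponent("gripper",
                             [Component("motor"), Component("force_sensor")])
arm = CompositeComponent("arm", [Component("joint_driver"), gripper])
print(arm.flatten())
```

Because `CompositeComponent` is itself a `Component`, composition can be nested to arbitrary depth, which is exactly the recursion the text describes.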

3.4.2 Step 4-2: Deployment Constraints Specification During the component construction process, developers should neglect distribution issues as far as possible. This does not mean that monolithic code should be produced. However, the structure of the system and the interaction of its components should reflect the needs of good functional design, not the properties of a (potential or actual) physical computational infrastructure on which the system will eventually be executed. The system design may, however, provide valuable information which imposes, or at least suggests, certain constraints on the later deployment of the system. If two or more components frequently exchange large amounts of information, thereby inducing a high communication payload between them, a good runtime architecture design may want to avoid distributing these components across a network with high communication overhead. During or after the design of primitive and composite components, such constraints need to be specified and documented for later use in the next process phase.
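Such deployment constraints can be recorded in a simple declarative form that the later runtime architecture construction step can check. The following sketch shows one hypothetical way to specify and verify a colocation constraint; the constraint type and component names are illustrative assumptions, not a BRICS format:

```python
"""Sketch of recording deployment constraints during component design.
The constraint type and names are illustrative only."""
from dataclasses import dataclass

@dataclass(frozen=True)
class ColocationConstraint:
    # Two components that exchange large data volumes and should
    # therefore be deployed on the same computational node.
    a: str
    b: str
    reason: str

constraints = [
    ColocationConstraint("camera_driver", "image_preprocessor",
                         "raw image stream, high bandwidth"),
]

def violated(constraints, placement):
    """Return the constraints broken by a component->node placement."""
    return [c for c in constraints if placement[c.a] != placement[c.b]]

# A placement that distributes the two components violates the constraint.
placement = {"camera_driver": "pc1", "image_preprocessor": "pc2"}
print(len(violated(constraints, placement)))
```

Documenting constraints in such a machine-checkable form lets the deployment phase detect violations automatically instead of relying on design notes.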

3.4.3 Step 4-3: Content Creation The content creation step may appear to be unusual. However, many robot applications involve functionalities whose construction or implementation often relies on databases (of non-trivial size), which are time-consuming to provide and themselves require some care in their production. Examples of such databases include:

• database of maps of large environments

• knowledge bases for objects to be recognized/manipulated

• knowledge bases for world models

• knowledge base of action and operator models for planners

• knowledge base of methods for hierarchical task network (HTN) planners

• database of speech samples for speaker recognition

• database of speech samples for speech generation

• database of faces for people recognition

• generally, databases of training examples for learning problems


While the type and format of the above data is often easily determined and implied by the choice of algorithms, the mere process of filling the knowledge and data bases can be very time-consuming. The quality of the data must be ensured, and inconsistencies, incompleteness, and incompatibilities often present severe obstacles to project progress. For some of the above examples, current research indicates that such information could possibly be downloaded and updated from the Internet. The EU projects RoboEARTH [164] [18] and RoboHOW [?] are examples of such research.

3.4.4 Step 4-4: Skills and Capabilities Testing This step should be quite straightforward: for all intermediate steps and final outcomes of the activities in this phase, tests need to be defined and executed. These tests will be particularly helpful in situations where the complete system exhibits some strange overall behavior. With all the tests so far, developers can check system functionality bottom-up, from simple hardware devices to simple or advanced skills and capabilities [154].

3.4.5 Notes on Phase 4 Agile Development Aspects: Phase 4 allows for agile development in many ways. The independent consideration of numerous individual skills and capabilities allows for concurrent development of small chunks of software in shorter periods of time. All steps in Phase 4 define tasks that can be delineated from each other quite easily. Bottom-up, top-down, inside-out, and incremental refinement are all approaches that easily fit with the activities described. Test-driven development is also possible; in this case one would start with Step 4-4, then iterate through the steps of the phase.

Model-Driven Development Aspects: Model-driven development aspects play an important role during Phase 4, as they bear substantial potential for accelerating robot application development. Although the detailed definition and concepts of models in this context are not yet completely settled, there is agreement that software re-use should be the guideline for further development. Techniques that foster software re-use are the introduction of component types and abstraction hierarchies not only on the object-oriented class level, but also for interfaces, data types, and components. This generates a drive towards abstracting and harmonizing interfaces and standardizing the data structures used in these interfaces. A high degree of platform independence, one of the essential goals in model-driven development, can be achieved by exploiting these abstraction hierarchies when defining skills and capabilities as composite components.
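How such abstraction hierarchies support platform independence can be illustrated with a small sketch: a skill is written against an abstract sensor interface and works unchanged with any concrete device behind it. All class and function names are hypothetical illustrations:

```python
"""Sketch of an interface abstraction hierarchy: a skill depends only on
an abstract sensor interface, not a concrete device driver."""
from abc import ABC, abstractmethod
from typing import List

class RangeSensor(ABC):
    """Abstract interface with a harmonized data type: ranges in metres."""
    @abstractmethod
    def read_ranges(self) -> List[float]: ...

class SimulatedLaserScanner(RangeSensor):
    """One concrete device; any other RangeSensor would work as well."""
    def read_ranges(self) -> List[float]:
        return [2.0, 1.5, 0.4]  # canned scan for the sketch

def obstacle_ahead(sensor: RangeSensor, threshold: float = 0.5) -> bool:
    """A skill defined purely in terms of the abstract interface."""
    return min(sensor.read_ranges()) < threshold

print(obstacle_ahead(SimulatedLaserScanner()))
```

Swapping the laser scanner for another `RangeSensor` implementation (a sonar ring, a depth camera wrapper) requires no change to the skill, which is the platform-independence effect the text describes.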

3.5 Phase 5: System Deployment

The fifth phase of the development process is concerned with both the functional and software integration of several skills and capabilities into more complex functionality and the building of complete systems. It is probably the least consolidated phase in terms of methods that are well established and agreed upon in the community. This problem was previously coined into the term “1001 architectures for 1000 robots” [131], in order to express the fact that there are no commonly agreed upon approaches for designing a good functional robot control architecture and its implementation in a software architecture. We assume that a general functional robot control architecture that is suitable for a wide range of applications does not exist. Such a functional architecture can possibly be designed for a particular domain, or a particular type of robot application, but may require at least modifications and tuning for each application. Any software engineering process for robotics can only

be successful if it allows developers to build customized functional robot control architectures. However, the software development process can be significantly improved if there are established means to implement these functional robot control architectures in software architectures. Component-based development has the potential to provide such means [177][86][140][103][62][175][182]. A complete robot application can be viewed as a (potentially very complex) composite component. Unlike an arbitrary composite component, however, it must be executable as a standalone application. Additional constraints may be defined that distinguish a full-fledged system from other composite components. This phase is therefore primarily concerned with the development of top-level composite components (see Figure 3.7).

Phase 5: System Deployment. Steps: Systems Packaging, Runtime Architecture Construction, System Launch Management, System-Level Testing.

Figure 3.7: The system building phase of the BRICS robot application development process.

3.5.1 Step 5-1: Systems Packaging This step encompasses the composition of composite components from simpler components (in the following called elements) and the packaging of these into a runnable system.

3.5.2 Step 5-2: Runtime Architecture Construction This step concerns the definition of the runtime architecture by deciding where and how each component needs to be deployed, while observing the deployment constraints defined during component composition.

3.5.3 Step 5-3: System Launch Management The developers also need to provide facilities for launch management, for system monitoring during operation, and for bringing the system into defined operational states, e.g. for maintenance.
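A minimal sketch of launch management might start components in dependency order before the system enters an operational state; the API and component names below are illustrative assumptions (no cycle detection or error handling):

```python
"""Sketch of a launch manager that starts components in dependency
order. Names and structure are illustrative only."""
class LaunchManager:
    def __init__(self, dependencies):
        # dependencies: component -> list of components it requires
        self.dependencies = dependencies
        self.started = []

    def start(self, component):
        if component in self.started:
            return
        # Start all prerequisites before the component itself.
        for dep in self.dependencies.get(component, []):
            self.start(dep)
        self.started.append(component)

    def launch_all(self):
        for component in self.dependencies:
            self.start(component)
        return self.started

manager = LaunchManager({
    "navigation": ["localization", "base_driver"],
    "localization": ["laser_driver"],
    "base_driver": [],
    "laser_driver": [],
})
print(manager.launch_all())
```

A real launch manager would additionally monitor the running components and support defined operational states (e.g. a maintenance mode), as described above.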

3.5.4 Step 5-4: System-Level Testing This step tests complex subsystems or the overall robot application on a system level [123][126]. The customer acceptance tests specified in the scenario building phase should be successfully run.

3.6 Phase 6: System Benchmarking

A robot is a potentially dangerous product, and special care must be taken to ensure it is complete, safe, reliable, and performant [115][48][29][101]. Each of the benchmarking steps focuses on a different aspect.


Phase 6: Benchmarking. Steps: Stress Testing, Safety and Security Testing, Reliability and Durability Testing, Performance Testing.

Figure 3.8: The benchmarking phase of the BRICS robot application development process.

3.6.1 Step 6-1: Components, Skills, and System Stress Testing This step focuses on functionality. Tests should ensure that the components provide the requested functionality, including producing the correct results under typical and extreme operating conditions (e.g. variations of system load, co-occurrence of events, difficult environment conditions like heat, sun, darkness, water, etc.).

3.6.2 Step 6-2: Safety and Security Checks The focus of this step is on safety and security. Safety is concerned with ensuring that the robot does no harm to subjects or objects. Security is concerned with ensuring that the robot cannot be accessed and abused by malevolent users or other systems.

3.6.3 Step 6-3: Reliability and Durability Assessment Any product must be ensured to be sufficiently reliable and durable before it can be marketed.3 Once the robot application has reached a state where it provides the necessary functionality, developers must also assure its reliability and durability by running long experiments, under potentially extreme conditions.4

3.6.4 Step 6-4: Performance Testing Last but not least, this step checks maximum operation rates for each component, skill, and subsystem. This step tests what is classically associated with (performance) benchmarking.

3.7 Phase 7: Product Deployment

So far, we have developed the application and tested it on a development prototype in the lab. It is now time to deploy the robot software application onto the actual target system, and to make sure we can operate and maintain the application later on, when it has been delivered to the customer [32][82]. The steps required for this process are illustrated in Figure 3.9. In this phase, we also need to make final decisions about which variants of the application will eventually go into production.

3For example, some researchers who use Hokuyo laser scanners very intensively in somewhat rough operating conditions have recently reported serious wearout issues rendering the sensor defective after just a few months. 4Willow Garage continuously runs tests of their prototypes over several days in a non-air-conditioned container exposed to the California sun. Temperatures inside the container are usually well over 40 degrees Celsius, often more.


Phase 7: Deployment. Steps: Target Platform Component Identification, Target Platform Resources Allocation, Maintenance Instrumentation, Target Platform System Testing.

Figure 3.9: The deployment phase of the BRICS robot application development process.

3.7.1 Step 7-1: Target Platform Component Identification

The focus of this phase is on the differences between the system that has been used for development so far and the system to be deployed to the customer. These differences may include differences in hardware and software. If we assume that developers did a good job of configuring the hardware platform in Phase 3, and that this configuration was adapted during development as additional requirements appeared, how can hardware differences then occur between the development platform and the target deployment platform? We can identify mainly two reasons for such a situation to arise:

1. Development platforms are usually incomplete. A typical example from robotics is that many development platforms are built without a cover or hull, because it plays no role in the development process. However, when a hull is finally added to the “finished” robot, the robot may suddenly behave very differently. The hull may constrain the actuators or affect the sensors in unexpected ways. Thus, a robot application should not be considered finished unless it has been fully assembled, including all (seemingly) non-functional parts, and has been tested in its final configuration.

2. Development systems often have a tendency to be somewhat lavish with respect to resources. A good example is computational resources, i.e. the number of computers, and their sizing, configured into the hardware platform. While this may be okay during development, and may even speed up development, it usually creates a cost problem for the final product. Therefore, the hardware configuration needs to be critically reviewed and optimized during this step.

The software side of the target platform component identification also has to cover two aspects:

• Modifications to the hardware may induce changes to the software.

• Software components that were only needed during development, e.g. for debugging and profiling purposes, but are not needed during operation, can be removed.

Last but not least, it should be mentioned that executable code for the target application will usually be optimized and compiled for speed, while development versions of the code will usually be compiled such that they can be easily debugged.


3.7.2 Step 7-2: Target Platform Resources Allocation Once both the final hardware and software components have been determined (and potentially differ from the setup available during development), the runtime architecture, i.e. the allocation of software components on the target hardware platform, needs to be determined and optimized.

3.7.3 Step 7-3: Maintenance Instrumentation Another important step before delivery of the robot application to the customer is its instrumentation for maintenance. This includes taking provisions for logging data that are required, or at least helpful, for maintenance, and adding and configuring tools for analyzing these data, tools for on-site (re-)calibration of system components, and tools for recording the maintenance operations themselves. Needless to say, all documentation, models, and source code for the whole application must be accessible during maintenance; the deployed system may itself hold this information on local storage, or it must be accessible from a remote code repository.

3.7.4 Step 7-4: Target Platform System Testing As the steps in the deployment phase may entail various modifications to the system, it must undergo another testing phase. This testing phase may include re-running tests defined in various previous phases; especially if hardware modifications have been performed, the definition of additional tests, particularly for the final hardware platform, may be warranted.

3.8 Phase 8: Product Maintenance

Little is so far known about this phase, as there are so few successful robot applications out there. The steps suggested here (see Figure 3.10) are inferred from experience in other industrial projects, such as maintenance of a production line, or comparably complex products, such as an upper-class automobile.

Phase 8: Maintenance. Steps: Log Analysis, System Tuning, System Extension, On-Site Testing.

Figure 3.10: The maintenance phase of the BRICS robot application development process.

3.8.1 Step 8-1: Log Analysis Complex systems and products that are controlled by computing devices nowadays keep records during operation. These records include data about the intensity and duration of system use, and documentation of various events, like system mode changes, errors that have occurred, etc. These records usually indicate problems during system operation and often allow to infer necessary maintenance activities, such as replacements of parts, the need for re-calibration,

etc. The first step during maintenance usually consists of a readout of the recorded data and their analysis.

3.8.2 Step 8-2: System Tuning System tuning is an activity that is typically performed after deployment of a system. Only after the system has been operational for some time, and information about its actual use has been acquired, may it be possible to apply modifications and improvements that help to avoid failure situations and to optimize the overall system performance.

3.8.3 Step 8-3: System Extension Occasionally, small system extensions need to be performed during the maintenance phase. These extensions may be induced by the replacement of parts (hardware devices), by additional parts (e.g. additional sensors mounted on the robot), or by extensions of system functionality due to new requirements that have arisen during operation.

3.8.4 Step 8-4: On-Site Testing Last but not least, the modified system needs to be tested before normal operation can continue. Due to a variety of constraints (time, space, and equipment available, usable environment, safety and security), the full range of tests that can be performed during development, especially in the benchmarking phase, will normally not be executable during maintenance. The range of runnable tests (determined during the deployment phase) should be rich enough to ensure at least safety, security, and reliability. The tests actually executed during this step may be determined by the type and extent of the maintenance work performed in the previous steps. For example, if log analysis indicated that the system seemed to have been working well and no serious failures were recorded, and no system tuning or system extensions have been performed, minimal or even no on-site testing at all may be required. If the maintenance work required modifications involving mounting additional sensors and/or mounting existing sensors at different places on the mobile platform, and significant modifications of the software were necessary, then a suitable range of tests should be run to ensure safe and reliable operation of the system.


Chapter 4

Patterns in Robot Architectures

4.1 Architecture Aspects Revisited

The different architecture aspects relate to the development process as follows:

• The functional architecture of the system is determined in the functional design phase.

• All three aspects of hardware architecture, i.e. mechanical, electrical, and computational architecture, are covered in the platform building phase. Of particular importance in this phase are the models describing the platform. We do not neglect the complexity of the design and implementation of mechatronic components and systems here. If such components are to be designed from scratch, this would justify a separate development process following standard engineering methods for each of these components. For BRICS, we assume that the mechatronic components are already available, and that the platform building process mainly consists of selecting appropriate components, configuring them into a system design that satisfies the hardware requirements derived in the functional design phase, and finally assembling the system and testing its basic functionalities.

• The component architecture is mainly determined in the capability building phase. Ideally, each robot application would eventually consist of a single, albeit large, component that is hierarchically composed out of simpler components. This is still difficult to achieve, even when using component-based approaches. In particular, robot programming frameworks like Orocos often support only a very flat "hierarchy" and allow a system to be composed out of many primitive components.

• The runtime architecture is finally determined during system deployment.

This means that the various architecture aspects are determined in phases 2 to 5 of the development process, which matches well with the intuitive understanding of most developers. Architecture aspects cannot, and should not, be relevant during the scenario building phase, which should be focused on user needs and should not already (over)constrain later system design by imposing particular architectural decisions. In the benchmarking phase, all architecture aspects should already have been fixed. The benchmarking phase assesses the chosen design wrt. certain criteria, like safety, robustness, performance, etc. It remains for future work to investigate to what extent the model-driven engineering approach will help during benchmarking to understand the actual system performance, to identify performance bottlenecks and security and safety risks, and to give guidance wrt. appropriate measures for fixing such problems.


Similarly, the better understanding and documentation provided by the different architecture aspects and respective models should make the product deployment process more effective, help to more easily customize the system design to run on the most cost-effective product platform, and provide support to more easily realize different product variants while taming development and maintenance costs. We assume that all architecture aspects and the respective models will be an integral part of any robot system maintenance manual. Just as these models help different developers to mutually better understand the system design on different levels, they would also help a maintenance engineer to more quickly track problems, find errors, check performance, or do other maintenance operations.

4.2 Towards Architectural Patterns

Functional reuse in robotics has seen significant growth in recent years with the introduction and growing popularity of component-based frameworks such as ROS [134], Orocos [44] and OpenRTM-aist [12]. Component-based frameworks contribute to reuse in any domain by allowing bundles of specific functionality with known interfaces, the components, to be exchanged and interchanged. However, the concentration of reuse in robotics on functionality only solves one aspect of system design and construction. There is little re-use on the architectural level, which addresses a different class of problems. Such problems include smooth flows of data from sensors to actuators ensuring timely responses, efficient pipelining of algorithmic steps, and organising clusters of components for efficient reuse. An aspect of robotics that makes architectural reuse more difficult is the huge design and configuration space. The environment, the task being performed and even the robot itself can all vary. Identifying the common functional elements within this space is relatively easy: a localisation system is still a localisation system, while a pair of legs is quite clearly not a set of wheels. On the other hand, identifying the common architectural concepts is comparatively difficult. How the data should be provided to the processing components may be similar for a class of tasks but vary depending on the specific sensors available and the quantity of data they produce. The structure of the components that provide control over mobility may be similar from an interface point of view but may vary according to distribution across multiple computing nodes when available, or centralisation when only a single computing node is available. Architectural patterns have been practically applied to provide architectural reuse in various software domains for two decades. 
The use of patterns is popular enough to have spawned many books cataloguing patterns for various domains, providing software developers with easy references to solve their design problems. Robot software development should learn from this trend. Common architectural patterns that are useful in robotics must be identified and catalogued such that they can be applied where appropriate. For effective cataloguing, a common pattern schema is necessary. In the following, we propose a schema based on the well-supported existing pattern schemas of other software development domains. A common framework concept, such as the increasingly popular component-based software concept, aids in both identifying and applying architectural patterns by reducing the differences between implementations of a pattern. Just as other domains have selected framework concepts based on their needs, such as object-orientation in user interface software and communicating objects in network programming, we have selected the component-based approach [42]. The schema presented in the following therefore assumes the use of this approach, which is increasingly popular in robotics, when implementing robot systems.


4.2.1 On Architectural Patterns in Robotics A software pattern is a specification for a commonly-applicable software structure, specified using textual descriptions and models. Patterns assist in the reuse of architectural concepts and structures. Unlike functional reuse, which focuses on reusing a specific implementation in various architectural styles, patterns aid the design of software structure. A pattern does not provide a ready-to-go solution: software developers must apply and adapt it to their specific problem. Patterns are particularly popular in object-oriented software development, where they were first popularised by Gamma et al. [74]. Specialised collections of patterns have since appeared in domains such as embedded systems [63] and distributed systems [46].

Categorizing Patterns in Robotics Along one axis, patterns in robotics can be loosely grouped into two categories: those that come from other domains and those that are unique to robotics. Just as robotics is an amalgamation of many domains of engineering, developing the software for a robot is an amalgamation of many domains, and thus the patterns from those domains are applicable. The first category is therefore likely to be well-populated. On the other hand, there are probably relatively few of the latter category of unique patterns. More interesting than uniqueness is the application of existing patterns to robotics. Many patterns in common use in other software domains can be applied to robotics in new and interesting ways. For example, the Model-View-Controller (MVC) pattern is common in software for desktop computing, where it is used to separate information representation, manipulation and display. When applied in robotics, however, the MVC pattern conceptually matches a common architectural technique where a model of the world is built from sensor data and used to control the robot. Finally, patterns are categorized within a particular domain based on purpose. For example, Gamma et al. divided their patterns into Creational, Structural and Behavioral patterns [74]. Buschmann et al. divide their patterns into finer categories, such as “Adaptable Systems” and “Communication” [46]. Similarly, it would be wise to categorise patterns for robotics (such a categorisation would be orthogonal to the uniqueness categorisation mentioned above). A straightforward categorisation differentiates by common functionality in robotics such as navigation, manipulation or planning.
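The conceptual mapping of MVC onto robotics described above can be sketched as follows; the classes are illustrative placeholders, not an implementation from any robot framework:

```python
"""Sketch of MVC mapped onto robotics: the model is a world model built
from sensor data, the controller derives robot commands from it, and
the view renders it, e.g. for an operator. Names are illustrative."""
class WorldModel:                       # Model: built from sensor data
    def __init__(self):
        self.obstacle_distance = float("inf")

    def update_from_sensor(self, distance):
        self.obstacle_distance = distance

class Controller:                       # Controller: commands the robot
    def command(self, model):
        return "stop" if model.obstacle_distance < 0.5 else "forward"

class OperatorView:                     # View: displays the model state
    def render(self, model):
        return f"nearest obstacle: {model.obstacle_distance:.1f} m"

model = WorldModel()
model.update_from_sensor(0.3)           # a sensor reading arrives
print(Controller().command(model), "|", OperatorView().render(model))
```

The separation mirrors the desktop use of MVC: the same world model feeds both the control loop and the operator display without either depending on the other.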

Specifying Patterns Patterns are typically specified using a mix of textual and semi-formal graphical descriptions. This is in keeping with the generally loose specification of a pattern that allows a developer to apply it as they see fit. Every description of a pattern provides the same pieces of information. This is often called a “pattern schema.” A schema for patterns may be generic, or it may have pieces of information specific to a software domain to aid description and selection of patterns in that domain. In the case of robotics, the various relevant categories described in the previous section suggest the need for a schema with some customisation. We describe a candidate pattern schema for robotics in the following.

Sources of Patterns Where should we look to find patterns for robotics? Patterns can be found anywhere that commonly-occurring ideas are found. In the field of robotics, there are several potential sources

of patterns that should be investigated, such as standardisation efforts like the OMG’s Robot Localisation Service (RLS), or situations where groups of components are commonly used together. In addition, robots used in competitions such as RoboCup1, where the robots are developed for the same purpose, may contain patterns.

Related Work on Patterns in Robotics In robotics, very few authors have addressed software patterns in their work. An attempt to formalize architectural choices (patterns) in robotics is described in [124]. Here, Passama et al. developed a domain-specific language with primitives common in robotics, such as layers, activities, and knowledge. The language allows developers to express architectural ideas in a more explicit and concise manner. Another language-oriented approach is described in [61]. Dittes and Goerick’s aim is to compare different system architectures in robotics. To this end, a language called Systematica 2D is introduced, which allows system architectures represented and described elsewhere to be translated into a common representation. This makes recurring architectural and organizational principles, such as closed sensor-actuator loops, visible. In [141], component-oriented communication and interaction patterns required in robotics, such as dynamic wiring, are described.

4.2.2 Pattern Schema In this section we define a pattern schema suitable for robotics. It is based on the existing pattern schemas described in [74] and [47]. As with pattern descriptions found elsewhere, the focus of this description structure is on describing the practical aspects of the pattern rather than theoretical possibilities. This is evident in the prominence given to implementation concerns, potential consequences of using the pattern, and sample code. These concerns are particularly relevant in robotics, where a huge heterogeneity in programming languages, software frameworks, and execution platforms is omnipresent. The presence of a diagram describing the pattern’s structure is also important, as it provides robustness against the ambiguities inherent in natural language and independence from any unique features of the language used in the sample code. We consider the “Known uses” entry to be particularly important for robotics. It is important to establish real-world examples of the pattern being used in order to show that it is of practical value to other robot developers. Unlike the established pattern description layouts, we have made ours more general than class-level descriptions. Given the increasing prevalence of component-based software architectures in robotics [42], it is important that we are also able to describe component-level aspects of design. A detailed description of the schema entries is given below.

• Name: The name of the pattern.

• Intent: The general design problem that the pattern solves.

• Also known as: Aliases for the pattern.

• Motivation: One or more use cases relevant to robotics that motivate the need for the pattern.

• Applicability: Describes in detail where the pattern can be applied, and, often, where it cannot.

• Category: The functional category to which the pattern belongs.

1 www.robocup.org


• Structure: The structure of the pattern (see Section 4.2.2 for more details).

• Consequences: The positive and negative consequences of using the pattern. In particular, trade-offs between different non-functional demands such as scalability and reliability are described.

• Implementation: Implementation details about programming language, framework (if any), and other points to consider such as configuration parameters or deployment issues.

• Sample code: A simple example implementation, or a reference to a well-known implementation, to aid in understanding the details of applying the pattern.

• Known uses: Known real-world examples of the pattern in a variety of robots, environments, and tasks.

• Related patterns: Similar and complementary patterns and variants.
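The schema above can be captured as a simple record type. The following is a minimal, illustrative sketch (field names follow the schema entries; the `PatternDescription` class and the example values are our own, not part of any framework):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PatternDescription:
    """One field per entry of the pattern schema defined above."""
    name: str
    intent: str
    also_known_as: List[str] = field(default_factory=list)
    motivation: str = ""
    applicability: str = ""
    category: str = ""
    structure: str = ""                  # prose plus a reference to a diagram
    consequences: str = ""
    implementation: str = ""
    sample_code: Optional[str] = None    # reference to an implementation
    known_uses: List[str] = field(default_factory=list)
    related_patterns: List[str] = field(default_factory=list)

# Illustrative instance, partially filled in
pipeline = PatternDescription(
    name="Pipeline",
    intent="Separate functional components which each depend on "
           "one other component's outputs.",
    also_known_as=["Processing chain", "Pipes and Filters"],
    category="Perception",
)
```

Such a record makes it easy to collect pattern descriptions in a catalogue and to check that no mandatory entry is missing.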

Describing Pattern Structure

We are interested in structural descriptions of patterns which are independent of any one particular component-based robot software framework or specific component model. Even though we showed (see Shakhimardanov et al. [149]) that the component models of well-known frameworks are similar, it is still challenging to establish a "one size fits all" component model. We propose using the following aspects as guidelines in establishing a description of the structure of a particular pattern.

• The components and composition aspect deals with the questions of which and how many components are involved in the pattern. The role and functionality of each component must be explained. In addition, the connections between components must be described, indicating how the components interact.

• The communication aspect deals with the type of interaction between components and its directionality. An example is loosely coupled data-flow architectures, where one or more components produce data for one or more consumers. Another example of particular importance for the structural description is more tightly coupled communication mechanisms such as those found in client/server architectures.

• The coordination aspect accounts for the importance of reactivity in robotics, such as the emission and reception of notifications and events in general, which are used for mode changes.

Quite often it is not necessary to include all aspects in the structural description, simply because some of them are irrelevant to the scope of the pattern. In general, the structure should be illustrated with a figure.

In the following section, we demonstrate the use of the pattern schema described above by briefly describing one pattern seen in robotics.

4.2.3 Example: The Pipeline Pattern

A common task in robot software development is to structure and organize different algorithmic modules or components in such a manner that they produce task-relevant information. Often raw sensor data, such as 3D point clouds, is processed sequentially through a number of algorithmic steps. The output of one step is the required input for another computation. A common and


feasible structural pattern is to organize components in a pipeline. The resulting Pipeline pattern, described below, is an example of a well-known pattern [47] applied in robotics in the "traditional" way.

Figure 4.1: The pipeline pattern with (optionally) several data sources (sensors) and a start/stop mechanism. The pattern consists of an initial processing step, arbitrary intermediate processing steps, and one final processing step to produce the final result.

• Name: Pipeline

• Intent: The goal of the Pipeline pattern is to separate functional components which each depend on one other component's outputs.

• Also known as: Processing chain, Pipes and Filters.

• Motivation: Sensor data in robotics often goes through multiple processing steps before it is used. The Pipeline pattern ensures functional separation between processing steps and consistent interfaces, allowing steps to be easily inserted, removed, and replaced.

• Applicability: The Pipeline pattern can be applied in any situation where a sequence of steps is performed on some known input to produce some known output, where both the input and output have a known data structure through time and use.

• Category: Perception.

• Structure: The structure is shown in Figure 4.1. It consists of an initial processing step with an input data port, to receive perceived data, and an optional input event port to start/stop the pipeline. A final processing step is required to provide the result of the pipeline. Between the initial and final processing steps we can find an arbitrary number of intermediate processing steps. The interface between each step is generally known, but not necessarily fixed, being dependent on the specific nature of each step.

• Consequences: Use of this pattern enables easy insertion, removal, and replacement of processing steps. A potential disadvantage is inefficient transfer of data between steps. The developer must take adequate precautions to avoid this.
• Implementation: The pattern requires that each processing step's implementation has a common interface to enable interchangeability. In addition, the pattern requires that the input and output of the pipeline are known in advance and do not change across implementations. As a general rule, components in a pipeline are aperiodic. From a control-flow and performance perspective it is important to consider when the pipeline starts to operate. Sometimes the first component of the pipeline has an input event port which can be used to start/stop the operation of the pipeline. This is usually driven and coordinated by the task in process, making it application-specific and not part of the pattern. In case a start/stop mechanism is used, developers must consider issues such as buffering or dropping of the processed data. These policies are application-specific.


• Sample code: For the source code of a system that uses a variant of this pattern, see [122].

• Known uses: The pattern is very common. For instance, the ROS image pipeline2.

• Related patterns: —

A real-world example of the Pipeline pattern is described in [90]. The pipeline processes 3D point clouds from a Kinect camera with the goal of detecting people regardless of motion or occlusion. The system has been developed in the ROS framework and consists of interchangeable components such as subsampling, segmentation, and classification, all of them separately configured and optimized for performance. Notable for this application is the importance of component deployment, more precisely, whether all components are deployed in one single ROS node or one ROS node per component. In [90], Hegger et al. showed that for large input data (> 80,000 points) the multiple-nodes configuration leads to a faster overall computation. This illustrates the benefits of deployment flexibility offered by applying the Pipeline pattern.
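The Pipeline pattern described above can be sketched in a few lines. This is a deliberately simplified, framework-free illustration, assuming a common call interface per step and a drop-while-stopped policy; the step names and data are invented and do not correspond to the point-cloud system of [90]:

```python
from typing import Any, Callable, List

class Pipeline:
    """Chain of processing steps sharing a common call interface.

    Steps can be inserted, removed, or replaced as long as each step
    accepts its predecessor's output.
    """
    def __init__(self, steps: List[Callable[[Any], Any]]):
        self.steps = steps
        self.running = True          # optional start/stop mechanism

    def process(self, data: Any) -> Any:
        if not self.running:
            return None              # illustrative policy: drop data while stopped
        for step in self.steps:      # output of one step feeds the next
            data = step(data)
        return data

# Invented steps, loosely mimicking a perception chain
subsample = lambda pts: pts[::2]
segment   = lambda pts: [p for p in pts if p > 0]
classify  = lambda pts: "person" if len(pts) >= 2 else "unknown"

pipe = Pipeline([subsample, segment, classify])
result = pipe.process([3, -1, 4, 1, -5, 9])   # -> "person"
```

Replacing a step (e.g. a different segmentation algorithm) only requires swapping one entry in the list, which is the interchangeability benefit named under "Consequences".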

2http://www.ros.org/wiki/image_pipeline/


Chapter 5

Modeling Robot Architectures

5.1 Model-Driven Development and Domain-Specific Languages

BRICS adopted early on the strategy to improve software development in robotics by promoting model-driven development (MDD). As the name suggests, models play a central role in this approach. A basic idea in MDD is to first develop a platform-independent model of a solution, which, with the help of a model describing a specific hardware platform, is later turned into a platform-specific model and eventually executable code. This approach has many well-known advantages. One is that the design of the solution is not influenced and constrained too early by hardware details. Another is that the different models focus on different aspects, and thereby help to separate different concerns of system development. Finally, developers can hope to reuse at least the more platform-independent models of their solutions. This chapter is concerned with models that help software development in robotics. These models can be textual or graphical. We first survey various component model concepts, then study in more detail issues concerning distribution, communication, and software connectors. Then we present the BRICS Component Model and several domain-specific languages (DSLs), which address specific architectural aspects in more detail.
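The step from platform-independent to platform-specific model can be illustrated with a toy transformation. All model contents below are invented for illustration and are far simpler than any real MDD toolchain:

```python
# Platform-independent model (PIM): what the solution needs,
# without any hardware detail
pim = {"component": "LaserDriver", "publishes": "scan2D", "rate_hz": 50}

# Platform model: how a concrete target realizes those needs
platform = {"transport": "shared_memory", "os": "linux"}

def to_psm(pim: dict, platform: dict) -> dict:
    """Merge a PIM with a platform model into a platform-specific
    model (PSM) -- a toy version of one MDD transformation step."""
    psm = dict(pim)
    psm["transport"] = platform["transport"]      # bind communication choice
    psm["period_ms"] = 1000 // pim["rate_hz"]     # derive scheduling detail
    return psm

psm = to_psm(pim, platform)   # the PSM would then drive code generation
```

The point of the sketch is only the separation of concerns: the PIM is reusable across platforms, while the platform model carries everything hardware- and middleware-specific.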

5.2 Component Models and Component Model Concepts

A component encapsulates a functionality (e.g. a SLAM algorithm) and restricts access to that functionality via explicitly defined interfaces [137]. In addition, interfaces are used to define the services on which a component depends in order to provide its functionality. The core idea behind component-oriented programming is to use components as building blocks and to implement an application by composing it from components.1 Although component-oriented robot software frameworks foster reuse of functional components, developers of robot applications still face a core problem: after having chosen a particular framework, they are locked into it and cannot make use of functionality developed in another framework. As an example, lock-in makes it impossible, or at least very difficult, to reuse a localization component developed in Orca in an OpenRTM application. So far, component-level interoperability between robotic software frameworks is not well supported. We analysed four component-oriented robotic software frameworks, namely Orocos, ROS, GenoM, and OpenRTM, from a model-level interoperability perspective. The evaluation is based on a step-wise approach introduced in [151]. Interoperability is first considered on the system level, then refined in detail on the component level, through analysing a set of software concepts relevant for component-oriented programming. Section 5.2.1 formalizes a set of relevant modeling

1See Brugali et al. [41] for an introduction into component-oriented programming for robotics.

primitives and relationships among them. Section 5.2.2 uses these primitives to assess component and system models used in the selected frameworks. Section 5.2.3 concludes and discusses implications of this work and future research to achieve interoperability between frameworks both on modeling and implementation levels.

5.2.1 Component Model Concepts

We identify two levels of abstraction for analysis and evaluation: systems and system constituents. This reflects a highly relevant distinction of perspectives when talking about reuse in the context of robot software frameworks. Vendors supplying hardware devices for integration in robot systems would like to enable and simplify the use of their products by supplying software that is easy to integrate into an application, however this application may look. Likewise, robotics researchers focusing on particular functional problems like object recognition, object manipulation, or SLAM would like to supply software modules that can be easily integrated into a larger software system when building a robot application. Both groups are mainly, if not exclusively, concerned with developing software that will, hopefully, become constituents of larger systems. In contrast, developers of complete robot applications are faced with the problem of designing and implementing an appropriate system architecture (both functional and on the software level), devising adequate control models, and implementing both by applying suitable software technologies and relevant standards. In order to minimize their own development effort, such system integrators prefer to reuse as much pre-existing functionality as possible, provided that it is available in a form which allows them to use it as system constituents. Component-based programming seems to be very attractive from both perspectives, as the concept of a component captures the nature of a system constituent very well. As we described earlier, the architecture of software systems can be analysed in terms of three aspects:

• A functional aspect, which (iteratively) decomposes the overall system functionality into smaller functional elements and their relationships and interactions, until elements are identified that can be concretely described and implemented.

• A software component aspect, which focuses on which components constitute a system, which types these components may be, which components are connected to each other, and of which types these connections may be.

• A runtime deployment aspect, which describes how a system consisting of many components is to be executed on a set of networked computers.

Below, we describe a set of software element and system concepts which relate primarily to the software component view and will serve as criteria to analyse different software system models.

System: A system can be defined as the composition of a set of interacting and interdependent entities [93]. Note that an entity refers to both components and the connections between them, which represent interactions and inter-dependencies among the components.

Component: Components are the system constituents providing functionality. There are almost as many definitions of the notion of a component as there are papers about component-based design and development. From a software life-cycle point of view, a component can be a modeling block/class represented in UML in the design phase, a function in the form of C source code during the implementation phase, and a running process in the deployment phase. In the context of robotics software frameworks, a component often represents an encapsulation of robot functionality (e.g. access to a hardware device, a simulation tool, a functional library) which helps to introduce structure. Other possible responsibilities


include achieving code-level or framework-level interoperability and re-usability, and being composed with other software by third parties [161, 176, 57]. In our work, a component is represented as a block (black box) which defines the boundaries of a particular functionality the robot provides. A component can consist of many fine-grained primitives such as classes, functions, etc. Discussion of components usually requires further concepts, some of which we discuss below.
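The black-box view of a component, with its provided and required services made explicit, can be sketched as follows. The `Component` class, the `can_compose` check, and all names are our own illustration, not part of any of the analysed frameworks:

```python
class Component:
    """A black box: functionality hidden behind explicitly declared
    provided and required services; access goes only through these."""
    def __init__(self, name, provides, requires):
        self.name = name
        self.provides = set(provides)   # services offered to others
        self.requires = set(requires)   # services this component depends on

def can_compose(client: "Component", server: "Component") -> bool:
    """Toy composition check: the server must offer everything
    the client requires."""
    return client.requires <= server.provides

# Invented example: a SLAM component depending on a laser driver
slam  = Component("SLAM", provides={"map", "pose"}, requires={"scan2D"})
laser = Component("LaserDriver", provides={"scan2D"}, requires=set())
```

Making requirements explicit in this way is what allows a system integrator to check, before deployment, whether a set of building blocks actually fits together.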

Port: Components need to interact with other components in their environment. The primitives making this interaction possible include ports, interfaces, data types, and connections. The latter three will be discussed below. A port is the software equivalent of the concept of a connector in hardware; ports are a component's communication end-points for its connections to other components. Ports play an important role in component-based design. While in object-oriented programming a class usually provides a single public interface, which can actually be used by any entity that obtains an object reference to an instance of the class, the use of ports allows developers to provide several functionally different interfaces and to constrain their use to well-defined entities that will be connected to a port (see Connections below). Ports can be typed. The port type may impose constraints on which type of connection may be associated with it. For example, the connection may be required to use particular communication protocols or synchronization mechanisms. Two types of ports are frequently needed in robotics:

• Data Flow Port: A data flow port is used in situations where there is a single supplier providing data at regular intervals to one or more consumers. An example is a component which encapsulates a laser scanner device and sends laser scans every 20 msec to anyone connected to it via such a data flow port. Syntactically, the port has a name (e.g. scan2D, position2D) and an interface for reading and writing data. Via this interface, the port can only communicate information with data semantics to and from other components' ports; the interaction is supposed to not directly influence control flow on either the sender or the receiver side, and mechanisms for synchronization or advanced handling of communication errors are not foreseen.

• Service Port: Although important for robotics, data flow ports are not sufficient to build sophisticated robot control architectures. For instance, modifying a component's configuration or coordinating its activity via a data flow port would be difficult, require extra effort, and lead to suboptimal designs. Therefore, a component model should feature a port type with control flow semantics. Syntactically, the port has a name and an interface made up of a collection of methods or functions, referred to as services.

Data flow ports and service ports are usually associated with different interaction patterns. While service ports usually imply synchronous interaction between components with clearly assignable client and server roles, data flow ports usually imply asynchronous interaction between components with clearly identifiable publisher and subscriber roles.
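The two port types can be contrasted with a minimal in-process sketch. Both classes and all names are invented for illustration; real frameworks implement these semantics on top of middleware, threads, and buffers:

```python
class DataFlowPort:
    """Asynchronous, data-semantics endpoint: one writer, many readers;
    writing does not influence control flow on either side."""
    def __init__(self, name):
        self.name = name
        self.subscribers = []

    def connect(self, callback):
        self.subscribers.append(callback)

    def write(self, data):
        for cb in self.subscribers:
            cb(data)                 # fire-and-forget, no return value

class ServicePort:
    """Synchronous, control-flow endpoint: a named collection of
    callable services with clear client and server roles."""
    def __init__(self, name, services):
        self.name = name
        self._services = services    # service name -> callable

    def call(self, service, *args):
        return self._services[service](*args)   # blocking call with a result

# Invented usage: a laser component publishing scans,
# plus a configuration service port
received = []
scan_port = DataFlowPort("scan2D")
scan_port.connect(received.append)
scan_port.write([1.2, 1.3, 1.1])

config = ServicePort("config", {"set_rate": lambda hz: f"rate={hz}"})
```

The asymmetry is the point: `write` returns nothing and ignores who is listening, whereas `call` blocks until the server-side service returns a result.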

Interface: An interface is a set of operations made available to the outside by a software entity. An interface is usually defined by a set of method signatures.

Data Type: The classification of data which is communicated between components of the system is done through data types. Both the arguments and return values for the functions/methods specified in the interfaces used in component ports need to be agreed upon in order to ensure correct representation and interpretation of the communicated data, especially if the two connected components eventually reside on different computers running


different operating systems, and are implemented in different programming languages. Automatic translation or conversion of data types across languages and systems can be difficult or even impossible if incompatible data types are used. Interoperability can be fostered by providing a standardized library of domain-specific data types, which should be designed to minimize or avoid such incompatibilities.
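What "agreeing on a data type" means in practice can be shown with a small typed message whose wire format is fixed. The type name loosely follows the positionVector2D example above; the class and its format are our invention, not a standardized library:

```python
from dataclasses import dataclass
import struct

@dataclass
class PositionVector2D:
    """Illustrative domain data type with a fixed binary wire format,
    so that components on different OSes / in different languages
    interpret the same bytes identically."""
    x: float
    y: float

    FORMAT = "<dd"    # two little-endian float64 values (16 bytes)

    def pack(self) -> bytes:
        return struct.pack(self.FORMAT, self.x, self.y)

    @classmethod
    def unpack(cls, data: bytes) -> "PositionVector2D":
        return cls(*struct.unpack(cls.FORMAT, data))

p = PositionVector2D(1.5, -2.0)
q = PositionVector2D.unpack(p.pack())   # round-trips to an equal value
```

A standardized library would fix such formats once for the whole domain, instead of every framework re-inventing them.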

Connection: Connections provide the actual wiring between ports of different components. That is, while a port is a component-level mechanism to make a particular component interface available to the outside, connections perform the linking between ports. In this role, connections are the concept suitable to encapsulate any details about communication protocols and synchronization. This is in line with the definition in [118], where connections mediate interactions among components; that is, they establish the rules that govern component interaction and specify any auxiliary mechanisms required. From an implementation perspective, connections may be realized as simply as memory access or a UNIX pipe, or as sophisticated as TAO, ICE, or ZeroMQ middleware runtimes and their respective interaction patterns. For instance, publisher/subscriber, client/server, and peer-to-peer are the most common interaction patterns. From a modeling perspective, a connection is a directed link connecting two ports.
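The role of the connection as the one place where transport details live can be sketched as follows. The transport names are stand-ins, not real middleware bindings, and all classes are invented for illustration:

```python
class Port:
    """Minimal port: a named endpoint that can receive data."""
    def __init__(self, name):
        self.name = name
        self.last = None

    def receive(self, data):
        self.last = data

class Connection:
    """Directed link between two ports; encapsulates the transport so
    that the connected components stay unaware of communication details."""
    def __init__(self, source_port, sink_port, transport="in_memory"):
        self.source = source_port
        self.sink = sink_port
        self.transport = transport   # e.g. "in_memory", "pipe", "zeromq"

    def transfer(self, data):
        # A real connection would marshal data according to self.transport;
        # here we simply hand it over directly.
        self.sink.receive(data)

out_port, in_port = Port("scan_out"), Port("scan_in")
link = Connection(out_port, in_port, transport="in_memory")
link.transfer([0.5, 0.7])
```

Swapping the transport then means replacing the connection, not touching either component, which is exactly the encapsulation argued for above.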

These essential concepts are used as a common denominator to analyse the component models defined in common robot software frameworks. The analysis aims to identify commonalities and differences, and to eventually estimate the effort required to make these frameworks interoperable both on modeling and implementation levels.

5.2.2 Component Model Concepts in Robot Programming Frameworks

The assessment approach adopts several features from benchmarking and software architecture evaluation methods. By combining them we cover the evaluation process from both practical and theoretical perspectives. Figure 5.1 provides a simple view of the evaluation method. This procedure can be considered a top-down approach, moving from general to more specific system aspects. We identified four stages for this procedure. The output of each stage serves as an input to the succeeding one, thus narrowing the problem down to several specific operational situations. Please refer to [151] for further details. With respect to interoperability among frameworks, the second stage consists of identifying a set of refinements directly influencing this quality attribute. These were explained in Section 5.2.1. Below we detail assessment results for the second step and draw conclusions for steps three and four, which will focus on implementation-level (compile and runtime) interoperability.

OpenRTM

OpenRTM (version 1.0) is component-oriented software developed by AIST in Japan. It is an open-source implementation of the OMG Robot Technology Component specification [167]. OpenRTM relies on the omniORB CORBA implementation for its communication infrastructure.

• Component: OpenRTM defines the component in terms of two parts. A functional part of the component, called core logic, contains functional algorithms and is structurally represented by a class as in OOP. A non-functional part of the component, called wrapper or component skin, provides means to expose functionality of the core logic by attaching platform resources to it.

• Port: OpenRTM components can interact with other components through ports. OpenRTM components include ports as a stand-alone construct. OpenRTM allows developers to define a


Figure 5.1: Different steps of the analysis and assessment.

polarity (required, provided) for ports. There are two types of ports. A data port, as its name suggests, is semantically equivalent to the data-flow port primitive defined in Section 5.2.1. Data ports are unidirectional and used to transfer data using a publisher/subscriber protocol. Components that have only data ports for interaction are referred to as data flow components in OpenRTM. The second type of port defined in OpenRTM is the service port. It relies on CORBA's interface description language (IDL) for specification and implementation and allows components to interact with RPC semantics.

• Data Types: OpenRTM relies on primitive types as defined by the CORBA IDL. There is no library of robotics-specific data types (e.g. forceVector, positionVector2D etc).

• Connection: On the model level, OpenRTM uses the concept of a connector to realize inter-component interactions. Connectors are specified through their connector profiles, which contain a connector name, an id, the ports it is connecting, and additional attributes. These connector profiles are implicitly deduced from the connections between components and the ports used for these connections.

GenoM

GenoM (version 2.0) has been developed at the LAAS-CNRS robotics group. It relies on a shared memory approach for communication among modules. One of the distinct features of GenoM is that it was the first to apply a model-driven code generation process in robotics software [71]. A GenoM module's structure, behavior, and resources are defined in a generic way by the GenoM module description language. The GenoM tool parses this module description to generate compilable code [7, 114, 9].

• Component: In GenoM, a component is referred to as a module. Like in OpenRTM, GenoM components can also be decoupled into functional and non-functional parts. A component developer is required to implement only the functional part of the module, which is represented by a set of so-called codels. Codels are defined as non-preemptable, atomic code units, usually in the form of C functions. The non-functional part of the module is auto-generated by the GenoM code generator tool.

• Port: The data flow port concept as defined above does not exist in GenoM modules. In order to exchange data, GenoM modules use posters, which are sections of


shared memory. There are two kinds of posters. Functional posters contain data shared between modules (e.g. sensor data). Control posters are the closest semantic equivalent to service ports; they contain information on the state of the module, running services and activities, client IDs, thread periods, etc. However, a component cannot write/send to the control posters of another component. Therefore, control posters do not provide the same functionality as the service ports of Section 5.2.1.

• Data Types: GenoM does not provide a robotics-specific library of data types, but supports the communication of both simple and complex data types as they are defined in the C programming language.

• Connection: Connections do not exist as a separate entity. The developer needs to specify, in a model description file for each module, which shared memory sections it needs to have access to. On the request/reply interface level, connections are set up through a Tcl2 interpreter, which plays the role of an application server for all running modules. Developers write Tcl scripts in which they define the module connections. This is very similar to the Player approach, where the role of the Tcl interpreter is taken by the Player device server.

ROS

The open-source Robot Operating System (ROS, version 'boxturtle') developed by Willow Garage aims to provide a software development environment for robotics. The underlying analogy of this robot software framework is that of an operating system, including package management, inter-process communication, and software development tools.

• Component: The main computational entity in ROS is called a node. A node is a process which provides some specific functionality. Thus, a node can be considered the ROS equivalent of a component as exemplified in Section 5.2.1.

• Port: ROS features the concepts of topics and messages, which effectively implement an equivalent to a data flow port with a publish/subscribe interaction pattern. A topic can be considered a named communication channel which is used to send and receive messages between nodes in an anonymous manner. This kind of interaction helps to decouple nodes and achieve more fault-tolerant systems. Synchronous interaction between nodes in ROS is achieved through the concept of services. In contrast to the traditional understanding of service ports, services are not groupable through service ports in ROS. However, like nodes and messages, services are encapsulated in a hierarchical naming structure (called names), leading to fewer naming conflicts and a more structured development of complex applications.

• Data Types: ROS provides a message description language to specify data structures (called messages). Messages are composed of arbitrarily nested primitive data types (integers, floats, etc.). These messages are used as a means of communication between nodes through topics and services. In addition, ready-to-use robot-specific data types such as geometry and pose messages are available.

• Connection: The concept of a connection does not exist in ROS. Location transparency between nodes is achieved through the concept of a master node. The master node provides naming and registration facilities for all nodes. However, the parametrization of the communication link between nodes (e.g. the size of the queue) is performed in the nodes themselves.
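The naming/registration idea behind topics can be illustrated with a deliberately simplified in-process sketch. Note that this conflates two things for brevity: in real ROS the master only resolves names, and nodes then exchange messages directly with each other; here the toy "master" also routes the messages. All names are invented:

```python
class ToyMaster:
    """Toy stand-in for the naming/registration role of the ROS master:
    publishers and subscribers meet only through a topic name."""
    def __init__(self):
        self._registry = {}   # topic name -> list of subscriber callbacks

    def register(self, topic, callback):
        self._registry.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # The publisher stays anonymous: it only knows the topic name,
        # never the identity of its subscribers.
        for cb in self._registry.get(topic, []):
            cb(message)

master = ToyMaster()
poses = []
master.register("/robot/pose", poses.append)          # a subscriber node
master.publish("/robot/pose", {"x": 1.0, "y": 2.0})   # a publisher node
```

The decoupling claimed in the text follows directly: either side can be replaced without the other noticing, as long as the topic name and message structure stay the same.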

2Tcl = Tool Command Language


Orocos

Orocos (version 1.8) is developed at the robotics group of Katholieke Universiteit Leuven. It focuses on developing a general-purpose modular software framework for robot and machine control. The framework provides essentially a set of libraries, among which the Real Time Toolkit (RTT) library delivers infrastructure and functionality to build component-based systems [157].

• Component: The RTT explicitly defines a component primitive. Conceptually, the RTT component is similar to OpenRTM components. A component's functional core is decoupled from the part responsible for platform resources. The component implements five different types of interaction endpoints. They differ according to their information semantics (data, event, property endpoints) and synchronization mechanisms (command, method endpoints). In practice, it does not seem to be easy to clearly determine the right type of port, and the developers have recently been re-thinking their approach in this regard.

• Port: Orocos components use data flow ports as a thread-safe data transport mechanism for (un)buffered communication of data among components. Data-port-based exchange is asynchronous/non-blocking. In Orocos, methods behave similarly to service ports as defined in Section 5.2.1.

• Data Types: Orocos provides a set of predefined standard data types for robotics. De- velopers can also create custom data types.

• Connection: Orocos provides an explicit connection concept between components.

5.2.3 Summary on Component Model Concepts

The results of our analysis can be summarized in Table 5.1:

          Comp.   Data Flow Port   Service Port   Data Types   Conn.
OpenRTM     X           X               X              X          X
GenoM       X           —               —              X          —
ROS         X           X               —              X          —
Orocos      X           X               X              X          X

Table 5.1: Overview of component modeling primitives in different robot software systems.

The results of the assessment in Table 5.1 show that there is not only a zoo of robot software frameworks, but also a zoo of different component models. However, most of these component models adopt quite similar concepts for components and component-related concepts, e.g. data flow ports and service ports. Most component models lack an explicit concept of connections. Connections seem to be a promising avenue to follow in order to explore interoperability concerns. A library of well-defined, standardized data types for the robotics domain, designed with expressiveness, performance, and robustness, but also communication and interoperability issues in mind, seems more than overdue. Also, developing a common ontology of concepts for the robotics domain appears to hold the potential for identifying many similarities between differently named concepts and ideas, for streamlining much of the development work invested by the community, and for greatly simplifying the currently complex world of a robot software developer. Although all the analysed component software has common features and attributes, there is no systematic approach to reuse software (models and code) across the systems. Observing

current trends in robotics software development, it is realistic to expect that the number of new software packages will grow in the future. This situation is similar to the operating systems domain some time ago, when there were a handful of systems which then grew in number. Most of those systems eventually provided some means for interoperability among each other. A similar approach should be taken in the robot software domain, since there is an abundance of robot software systems and component models with largely the same functionalities out there. At the same time, there is no way to persuade people to use The Grand Unifying Solution, and the best approach to make progress is to achieve interoperability between existing systems on different levels, i.e. on the model level, code level, etc.

5.3 Distribution and Communication in Component Models

In this section, we look a bit deeper into distribution and communication aspects in component models.

5.3.1 Motivation

In general, the core idea behind component-oriented programming is to reuse components as “building blocks" and to implement an application by composing it from components. An example of a component-oriented robotic application is shown in Figure 5.2. In [150] we showed that many robotic software frameworks share common primitives manifested in software concepts such as components, interfaces, ports, and connections. Further, Biggs et al. [28] showed that, through the similarity of these concepts, interoperability between robotic software frameworks is feasible. This also validates the fact that there is already enough core robotic functionality and technology available to develop sophisticated robotic applications and to fulfill functional requirements. However, this is not true with respect to non-functional requirements or quality attributes of robotic applications, such as scalability, robustness, and maintainability. These requirements become more and more important when the functional application needs to be deployed and integrated on a distributed and heterogeneous system. For such settings, system building and integration becomes a major challenge. This follows from several factors, which we call points of variation of the whole system. The points of variation identify those parts of the system which need to be made configurable, for instance, the transport protocol to be used in the component network, the periodicity of components, or the communication rate supported by the framework. Very often these configuration parameters and settings are neither explicitly available nor configurable for the robot software integrator, even though these settings are crucial to achieve non-functional requirements. Below, we introduce the protocol stack view (PSV) (see Section 5.3.6). The PSV is a systematic but pragmatic model to identify the points of variation in robot software and to assess quality attributes of distributed and heterogeneous systems.
It is further used to design scalability experiments with two software packages, the ROS software framework and the ZeroMQ 3 communication library (see Section 5.3.7).
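The points of variation mentioned above can be made explicit to the integrator instead of remaining buried inside a framework. The following sketch illustrates the idea with a hypothetical configuration structure; all class and field names are our own and are not taken from ROS, ZeroMQ, or any other framework.

```python
# Hypothetical sketch: making points of variation explicit and configurable,
# rather than hiding them inside the framework. Names are illustrative only.

from dataclasses import dataclass

@dataclass
class CommunicationConfig:
    transport: str = "tcp"      # point of variation: transport protocol
    period_ms: int = 100        # point of variation: component periodicity
    max_rate_hz: float = 30.0   # point of variation: supported comm. rate

@dataclass
class ComponentConfig:
    name: str
    comm: CommunicationConfig

# A system integrator can now inspect and tune these settings explicitly:
camera = ComponentConfig(
    "camera_driver",
    CommunicationConfig(transport="udp", period_ms=33, max_rate_hz=30.0),
)
print(camera.comm.transport)  # -> udp
```

With such a structure, the settings that are "crucial to achieve non-functional requirements" become first-class, inspectable values rather than hidden framework defaults.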

5.3.2 A Use Case

Large-scale service robots performing challenging tasks in domestic environments (e.g., grasping arbitrary objects) need to integrate a large set of software components. Figure 5.2 shows a simplified version of the component network on our Care-O-bot 3 robot, which we used for participation in the RoboCup@Home competition. In RoboCup@Home a robot is required to perform a multitude of tasks, like detecting and recognizing people, guiding or following them,

3http://www.zeromq.org/

c 2013 by BRICS Team at BRSU 42 Revision 1.0 Chapter 5. Modeling Robot Architectures 5.3. Dist+Comm in CMs

Figure 5.2: A component network (ROS nodes) on a Care-O-bot 3 robot performing tasks in a domestic environment.

detecting, recognizing, and localizing objects in the environment, and manipulating such objects, e.g., in order to serve drinks to the inhabitants of a home environment (e.g., a living room). Every box in the figure denotes a component (in this particular case: a ROS node). The component network is deployed on two different computational nodes (visualized as gray background boxes). In total, the network consists of 24 components, where 7 components provide perceptual information (green boxes) and 3 components provide access to actuators (red boxes) such as the arm, the hand, and the base. The other components are processing components performing functionalities such as segmenting planes from 3D point clouds and planning. The interaction between the components follows a publish/subscribe pattern. The network data flow is visualized by directed edges, where the frequency of data publishing is also visualized: red solid lines denote frequencies above 10 Hz, red dashed lines denote frequencies between 1 and 10 Hz, and black dotted lines denote occasional aperiodic communication between components (e.g., notifications of environmental events). We classify the components into three categories: publishers (e.g., sensor components), brokers (e.g., the plane segmentation component, which both subscribes to data and publishes data), and subscribers (e.g., actuator components). The configuration depicted in Figure 5.2 allows our robot to perform the task of serving a drink. However, because systems evolve over time and more components need to be added in order to provide additional functionality for different tasks, robot systems integrators face scalability requirements. The scalability requirements are directly influenced by the points of variation, e.g., the insertion of more components into the network, changes in the topology of the network like increased input and output connections, increased load on the network through larger messages, or a modified deployment setting. The impact of these variations might be negative, e.g., parts of the system no longer behave as expected, messages get lost, connections between components become unreliable, and the whole system slows down. If this is the case, we say that a system does not scale. In such a case, a robotics system integrator must analyze the system from a non-functional viewpoint. The architecture shown in Figure 5.2, however, does not assist this analysis because it depicts only the functional view of the system: the input and output relations between functional components. Whether the architecture maps directly to the physical system is rather unclear. An infrastructural view of the system is required, which allows one to analyze the system (e.g., finding communication bottlenecks) and which provides support to inspect and configure the points of variation (e.g., whether a link between components is a buffered queue on a local machine or not).
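Such an infrastructural view can be as simple as a machine-readable graph whose edges are annotated with publishing frequency and message size. The following sketch (with hypothetical component names and numbers) shows how these annotations support a basic bottleneck query; it is an illustration of the idea, not part of any framework.

```python
# Sketch of an infrastructural view of a component network: each edge
# carries its publishing frequency (Hz) and message size (bytes), so the
# estimated load per link can be computed. All names/values illustrative.

edges = [
    # (publisher, subscriber, freq_hz, msg_bytes)
    ("camera", "plane_segmentation", 30.0, 2_000_000),
    ("plane_segmentation", "grasp_planner", 10.0, 100_000),
    ("laser", "navigation", 20.0, 10_000),
]

def link_load_bytes_per_s(freq_hz, msg_bytes):
    # Estimated steady-state load of one publish/subscribe link.
    return freq_hz * msg_bytes

# Rank links by estimated load to spot candidate bottlenecks.
ranked = sorted(edges, key=lambda e: link_load_bytes_per_s(e[2], e[3]),
                reverse=True)
for pub, sub, f, size in ranked:
    print(f"{pub} -> {sub}: {link_load_bytes_per_s(f, size) / 1e6:.1f} MB/s")
```

Even this minimal annotation already answers questions the purely functional view cannot, e.g., which link saturates first when components are added.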

5.3.3 Concepts

As exemplified in Section 5.3.2, on a system level any application can be viewed in terms of two main elements: components, which provide or require some functionality, and connectors, which enable that functionality to be distributed among the components. Very often, we are not interested in the implementation details of components and connectors. However, as the complexity of integration and maintenance increases, it is important, in particular for the system integrator, to be aware of the implementation details. We first briefly describe the core concepts required for the following discussion.

• Components are the system constituents providing functionality. In the context of robotics software frameworks, a component often represents an encapsulation of robot functionality (e.g., access to a hardware device, a simulation tool, a functional library), which helps to introduce structure. Moreover, components may contain several ports, which are the communication end-points for their connections to other components.

• Connectors provide the actual wiring between ports of different components. More precisely, connections perform the linking between ports. With this role, connections are the concept suitable to encapsulate any details about communication protocols and synchronization. This is in line with the definition in [118], where connectors mediate the interactions among components.

There are two main contributors to the communication between interacting components. The first contribution comes from the way the connector (or data channel) is configured; the second is determined by the component’s port (or connection end-point). Below we describe what points of variation appear in these contributions and how they are related to each other to achieve an information transfer.

A connector is defined in terms of three types of models. These are related to the distribution of participating components, the interaction among components, and the actual transport which enables the transfer of data.

• Distribution model is concerned with how components are distributed in a networked environment. In analogy to computer networks, in the distribution of software components one can talk of software routers, queuers, brokers, servers, proxies, or any other component which functions as a middleman. These functionalities often consist of finding components, routing or queueing data, etc. In this regard, one can identify the following common distribution models. We will not explain the functionality each of the models brings, but rather give real-world examples.

c 2013 by BRICS Team at BRSU 44 Revision 1.0 Chapter 5. Modeling Robot Architectures 5.3. Dist+Comm in CMs

– Broker model is one of the most common distribution models. It is often met in object-based middleware systems. Its main purpose is to serialize objects and to find objects with the corresponding interfaces. Some examples of broker-based systems are ORB-based middleware frameworks such as CORBA [145] and ICE [183] implementations.

– Directory or naming server model is common to systems implementing service discovery. Such systems require that every component registers at runtime with a directory service component. Later on, if any other component in the system requires some service, the first thing it does is to connect to the directory service component to get a list of available services and their locations. In the robotics domain, one could name ROS [5,6] with its rosmaster directory service. Another example is the ORCA2 [34,37] system, which uses the ICE naming service for the same purpose.

– Proxy model introduces components which act as data or service intermediaries between components. Their function could consist of caching often-used data or objects (data with methods). They could also serve the purpose of filtering requests between components, e.g., IP or data filtering.

– Queuer model is often adopted in message-oriented middleware. It fulfills the same purpose as a broker in ORB-based systems. But unlike an ORB broker, a queuer (also called a messaging broker) works with message queues. It also provides peer finding, filtering, routing, serialization, and persistence capabilities for queues and messages. A well-known standard that uses this distribution model is AMQP [94], [109].

• Interaction model is concerned with the way individual components exchange data while maintaining their autonomy. In other words, the interaction model allows one to assign roles to every component in a system. There are mainly three interaction paradigms, which can be further described through the kind of invocation approach they use.
– Publisher-subscriber model: in this model a component can be either in the role of publisher or subscriber. Publisher and subscriber are not aware of each other’s presence and often interact through a middleman component. Thus, this model is also often referred to as the implicit invocation model. Here a naming service or a broker component could play the role of the middleman. Since components are not aware of each other’s presence, the broker should have some kind of filtering and routing mechanism to be able to route data correctly. This is often achieved by separating data into topics (named channels) or based on their content. An example of such a system is the ORCA2 framework, which uses the IceStorm service from the ICE middleware framework [183]. In the robotics domain one could list ROS, which uses a topic-based publisher-subscriber model [5,6].

– Peer-to-peer model: in this type of interaction there are no specific roles of data consumer or producer. Any component can perform any role. The interaction occurs through explicit invocation. Since there is no clear distinction of roles, all components should be capable of resolving information about other components in order to connect. Such information resolution could take place through a dedicated directory service component or by broadcasting to every component in the network. For instance, in the ORCA2 framework a similar effect is achieved through the IceGrid Registry service of ICE [183].

– Client-server model: in this model there are components with distinct roles, which can be either clients or servers, but not both. As with the peer-to-peer model, service invocation has an explicit character. In this model, clients should be aware of the servers’ locations and the services they provide. Examples of systems using this model are the initial versions of the Player framework [4].

• Transport model represents the transport protocols which can be used to actually transfer data from source to destination. These could be any unicast (e.g., TCP) or multicast (e.g., UDP, PGM) protocols from the ISO OSI transport layer, as well as protocols better suited for single-host configurations (IPC, in-process, shared memory, etc.).

As can be observed, the implementation of a particular component-component interaction is often very much influenced by the available distribution model. The reason one needs to separate them explicitly is the ability to configure component sub-networks with various requirements on communication. For example, the fact that a Cartesian-space arm controller works in client-server mode should never force one to pursue a similar connector model for an image pipeline. Therefore, it is crucial to be able to scale component networks by making their distribution and interaction models explicit. Pragmatically speaking, connectors are only one side of the story. For components to communicate, one still needs to instantiate and configure primitives which perform the actual connection and session management. These primitives are part of the end-points or ports of both source and destination components.
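The three connector models described above can be summarized in code. The following sketch encodes them as explicit types; the class and enumeration names are our own, and the two example configurations mirror the arm-controller vs. image-pipeline contrast from the text.

```python
# Sketch of the connector concepts: a connector is characterized by its
# distribution, interaction, and transport models. The enumeration values
# follow the text; the encoding itself is illustrative, not a real API.

from dataclasses import dataclass
from enum import Enum

class Distribution(Enum):
    BROKER = "broker"
    DIRECTORY = "directory"
    PROXY = "proxy"
    QUEUER = "queuer"

class Interaction(Enum):
    PUB_SUB = "publisher-subscriber"
    PEER_TO_PEER = "peer-to-peer"
    CLIENT_SERVER = "client-server"

class Transport(Enum):
    TCP = "tcp"
    UDP = "udp"
    SHM = "shared-memory"

@dataclass
class Connector:
    distribution: Distribution
    interaction: Interaction
    transport: Transport

# Different sub-networks can use different connector configurations:
arm_control = Connector(Distribution.DIRECTORY,
                        Interaction.CLIENT_SERVER, Transport.TCP)
image_pipe = Connector(Distribution.QUEUER,
                       Interaction.PUB_SUB, Transport.SHM)
```

Making these choices explicit per sub-network is exactly what allows the image pipeline and the arm controller to be configured independently.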

5.3.4 Communication and Connection Model The meta-model shown in Figure 5.3 summarizes the relevant concepts related to connections of components in component models.

[Figure 5.3 (meta-model excerpt): a Connection comprises a Connector and Ports; a Connector is characterized by a Distribution model (Broker, Proxy, Queuer), an Interaction model (Client-Server, Peer-to-Peer, Pub/Sub), and a Transport model (TCP, UDP, RDP); a Port is associated with the PSV.]

Figure 5.3: An excerpt of the communication and connection meta model.

5.3.5 Software connectors in robotics

In the following section, we focus exclusively on the concept and implementation of connectors, since they play a major role in distributed robotic applications. In [118], a classification framework is defined which attempts to structure common software connectors. Various connectors are discussed from their functional perspective, referred to as service categories (e.g., communication or coordination functions), and from how this functionality is implemented, referred to as connector types, e.g., through procedure calls, events, or linkages. In general, the idea is to view existing connector technologies broadly, regardless of whether they appear in a distributed setting or a single monolithic program. The connectors are analyzed as standalone entities, without the semantics of the context they are used in. We think that such an analysis should be an integral part of the system analysis, with well-placed context information. Therefore, complementary to the analysis framework introduced in [118], we introduce a protocol stack view to discuss connectors and their role in distributed robot software system integration.

5.3.6 Protocol Stack View (PSV)

The PSV is a general-purpose analysis model which can describe the design principles behind contemporary distributed software frameworks. One could position the PSV model between the abstract ISO OSI reference model and a specific UML model of software internals. It allows one to view a software system from the perspective of the combination of existing technologies used to build that system. As with the OSI model, the PSV considers a software application to be composed of layers. Each layer is built on top of the functionality of the preceding one, at the same time abstracting ‘details’ of implementations. In more detail, each of the layers represents a model or a collection of functionalities (see also Figure 5.4).

Figure 5.4: The Protocol Stack View and its relation to the ISO/OSI network reference model.

• Transportation layer represents a collection of functionalities to transfer data, i.e., byte/bit stream/packets, from one location to another according to some set of rules. The combination of type of data and how these data are transferred is what defines a ‘network protocol’. The TCP and UDP protocols are some of the most common ones.

• Messaging layer represents a collection of functionalities to encode the data to be transferred in some format. It also defines rules for the way the data should be interpreted on the receiving and sending ends. Additionally, some protocols on this layer provide functionalities for session management, i.e., the way inter-application interactions should take place. Some notable examples are SOAP, XML-RPC, and HTTP.

• Interface definition layer represents a collection of functionalities to define the signatures (names, return types, parameters) of interfaces (in different contexts this has different names, e.g., remote procedure call signature, service name, port name, but eventually and pragmatically they all boil down to the same technological concept) through which applications interact with each other. These protocols may also provide functionality to support the definition of different communication modes, such as synchronous or asynchronous. Some examples are the group of protocols known as interface definition languages, such as ICE Slice [183], OMG IDL [145], and XPIDL.

Revision 1.0 47 c 2013 by BRICS Team at BRSU 5.3. Dist+Comm in CMs Chapter 5. Modeling Robot Architectures

Layer                | CORBA                       | Web services          | ROS                      | ORCA
---------------------|-----------------------------|-----------------------|--------------------------|------------------
Service Discovery    | Naming service, IOR, URL    | UDDI                  | ROS naming service       | IceGrid
Interface Definition | IDL interface def. lang.    | WSDL                  | ROS message descr. lang. | ICE Slice
Messaging            | Custom serialization via    | XML-RPC, SOAP         | XML-RPC and custom       | ICE serialization
                     | object references, IIOP     |                       | serialization            |
Transport            | TCP, UDP                    | TCP, BEEP, HTTP, FTP  | TCP, UDP                 | TCP, UDP

Table 5.2: Protocol stack view of some existing technology frameworks.

• Service discovery layer represents a collection of functionalities which allow distributed applications to register their services with some central repository (though this is not a requirement) and to discover the services provided by others. Common examples of such protocols are UDDI, DNS, mDNS, and the CORBA Naming Service. These protocols might rely on some other technologies such as URL, URI, URN, etc.

The layers introduced in the PSV are not just conceptual mechanisms to analyze distributed software frameworks; they also help to identify the collection of specific tools that a particular distributed software framework uses. If all the tools for the appropriate layers were not available (see Table 5.2), one would have to start thinking in terms of socket libraries, host IPs, port numbers, byte order, thread and connection coordination, and many other issues. Additionally, the PSV can be used to identify technology or implementation points of variation on each depicted level, but also to suggest ways to extend existing monolithic programs to distributed settings. The latter is very important because it allows users to build complex distributed systems from a combination of existing third-party libraries which provide only specific functionalities. For instance, assume a case where one already has a distributed application or a general-purpose framework; then the PSV could be used to understand how the processing/development should take place in an application based on that framework (see Table 5.2).
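As an illustration, a PSV description of a framework can be captured as a simple layer-to-technology mapping, mirroring the ROS column of Table 5.2. The dictionary encoding and the helper function below are our own sketch, not part of any tool.

```python
# PSV of a framework encoded as a layer -> technologies mapping.
# Entries follow the ROS column of Table 5.2; the encoding is illustrative.

PSV_ROS = {
    "service_discovery": ["ROS naming service"],
    "interface_definition": ["ROS message description lang."],
    "messaging": ["XML-RPC", "custom serialization"],
    "transport": ["TCP", "UDP"],
}

def points_of_variation(psv):
    """Layers offering more than one technology are candidate
    points of variation for the system integrator."""
    return [layer for layer, techs in psv.items() if len(techs) > 1]

print(points_of_variation(PSV_ROS))  # -> ['messaging', 'transport']
```

Encoding the PSV this way makes the framework's technology choices queryable, which is precisely how the points of variation on each level can be identified systematically.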

As mentioned before, the PSV is a pragmatic model that defines a general guideline for the design of distributed software systems and, in particular, connectors. The layered approach allows an incremental approach to system development and helps to cope with the complexity of distributed software. However pragmatic it may be, though, it does not tell one anything about the character of connectors. That is, from the PSV alone, one cannot tell the type of the interaction policy (e.g., pub-sub), the synchronization type, or the type of the distribution model (e.g., brokered, proxy) supported by the connector. These are, of course, part of the specification of a particular technology used on one of the layers defined in the PSV. For instance, if one intends to use the Slice and ICE middleware combination for all four layers, then the connectors of that system would support both synchronous and asynchronous and both non-idempotent and idempotent method calls over a publisher-subscriber model and a naming service (implemented as a proxy). At the same time, these are also points of variation which are specific to that particular layer and technology choice. In this research we focus on two major points of variation that influence the quality of software connectors: distribution and interaction models. In the next section we design a set of experiments to evaluate these points of variation.

5.3.7 Experimental Analysis of Message Latencies

In this section, we investigate how the previously described approach to robotics software development scales. We do that by analyzing how the communication of messages between components using a publisher/subscriber pattern behaves if we vary i) the message size, ii) the frequency with which messages need to be communicated, and iii) the number of subscribers.

Experimental Methodology

Scalability can rarely, if ever, be measured directly. It is an abstract, non-functional quality attribute, and its semantics depends on the context. Scaling up systems usually changes several system parameters, and often tradeoffs are involved, for example, the number of users able to use the system vs. response time, or computational power vs. cost of investment. It is therefore quite difficult to perform a systematic assessment of properties like scalability while taking into account a possibly broad range of factors that influence it. For the identification and evaluation of such factors we use the methodology described in [151]. This procedure follows a top-down approach and consists of four stages.

• Step 1 consists of identifying a non-functional or functional quality attribute for a system or an application under assessment. In our case this target quality attribute is the scalability of distributed robotics software applications, involving fully or partially connected networks of software components.

• Step 2 consists of refining the target quality attribute by identifying concrete factors that influence it. Preferably, the target quality attribute can be related to a set of measurable system quality attributes, and at least partial knowledge about the correlation of measurable factors and the target quality attribute is available. For assessing the scalability of distributed robotics software applications, such factors include various dimensions of scalability, like the size of the messages communicated, the frequency of message exchange, the size, structure, and topology of the communication network, and the bandwidths available for communication between components. Some of these are not independent of each other; for example, a given bandwidth imposes limits on the achievable combinations of message size and frequency of message exchange. Some factors have several facets themselves, like network structure and topology: network protocol (publish/subscribe), types of components involved (publishers, subscribers, brokers), topology (peer-to-peer, client/server, bus-based), and number of components. Other factors to be taken into consideration will both depend on and result from particular choices for the aforementioned factors, e.g., the latencies for message exchange. Last but not least, we must consider the influence of scaling on other quality attributes, like system stability, robustness, security, and safety. All of these may be compromised, e.g., if the latencies of message exchange exceed certain boundaries and a timely system response to the occurrence of critical events can no longer be guaranteed.

• Step 3 consists of devising an experimental scenario. We use the motivational use case described in Section 5.3.2.

• Step 4 consists of identifying inputs, outputs, nominal conditions, benchmarking target, benchmarking platform, etc. for the experiment.

Revision 1.0 49 c 2013 by BRICS Team at BRSU 5.3. Dist+Comm in CMs Chapter 5. Modeling Robot Architectures

Intent and Hypotheses

For our experiments, our intent is to assess the scalability of a particular approach to building robotics software applications. The scaling dimensions include increased message sizes, increased frequency of message transmission, and an increased number of subscribers. The question is how scaling up a robotics software application along these dimensions will affect the overall communication behavior. We measure this by looking at message latencies, i.e., the time passing between the start of the transmission of a message by the publisher and the reception of the message by the subscribers. We have the following hypotheses:

1. Longer messages will take longer to transmit. Latency should have a component that scales proportionally to message size.

2. A higher frequency of message transmission will not affect latency, unless the overall communication load approaches the network bandwidth.

3. More subscribers do not affect latencies, or only slightly.

Please note that the last hypothesis is one that seems to be assumed by many robotics software application developers. The experiments will show that this assumption is not well justified.

Experiment Design

The experiments we designed all involve a publisher/subscriber network with a single publisher. We use the following points of variation (controlled variables):

• the protocol stack applied (ZeroMQ, ROS),

• the message size (100 KB, 2 MB),

• the frequency of message transmission (10/30/100 Hz), and

• the number of subscribers (1/10/50).

The measured variable is the latency of messages. In order to measure latency, i.e., the time needed for message transmission, the publisher reads the system clock and adds a time stamp to the message just before starting transmission. The subscribers read the system clock after completely receiving a message and compute the difference to the time stamp included in the message. When the publisher and all subscribers run on the same computer, they share a common system clock and no clock synchronization procedure is needed. A single experiment run consists of publishing 1000 messages of a given size, with a given frequency, to a given number of subscribers. The individual message latencies are computed by each subscriber and recorded for later analysis. This setup was chosen to facilitate the implementation of the experiments. The principle behind the evaluation remains the same regardless of the deployment settings. Of course, there are different implementation details for different deployment models. Though they may affect the values of the measurements, they do not influence the underlying principles.
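The measurement principle can be sketched as follows for the single-host case, where publisher and subscribers share a clock. Thread-safe queues stand in for the actual ROS/ZeroMQ transports used in the experiments; the message count, payload size, and period are illustrative.

```python
# Minimal single-host sketch of the latency-measurement principle:
# the publisher timestamps each message just before sending; each
# subscriber computes latency on reception. In-process queues stand
# in for the real ROS/ZeroMQ transport; all parameters illustrative.

import queue
import threading
import time

N_SUBSCRIBERS = 3
N_MESSAGES = 100

subs = [queue.Queue() for _ in range(N_SUBSCRIBERS)]
latencies = [[] for _ in range(N_SUBSCRIBERS)]

def publisher():
    payload = b"x" * 1000  # stand-in message body
    for _ in range(N_MESSAGES):
        t_send = time.perf_counter()   # timestamp just before transmission
        for q in subs:                 # note: effort grows with #subscribers
            q.put((t_send, payload))
        time.sleep(0.001)              # publishing period

def subscriber(i):
    for _ in range(N_MESSAGES):
        t_send, _payload = subs[i].get()
        latencies[i].append(time.perf_counter() - t_send)

threads = [threading.Thread(target=publisher)] + [
    threading.Thread(target=subscriber, args=(i,)) for i in range(N_SUBSCRIBERS)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sum(len(lat) for lat in latencies))  # -> 300 recorded latencies
```

The inner loop over `subs` also makes visible why the publishing effort grows with the number of subscribers, which becomes relevant in the analysis of Hypothesis 3.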

Experiment Execution

The experiments were all performed on a PC with an Intel Core 2 Duo CPU running at 2 GHz, with 4 GB of RAM, running the Debian Linux operating system with the version 2.6.32 kernel. The programs for the experiment were written in C++, compiled with the GNU C/C++ compiler version 4.5.2, using glibc version 6-2.11.2-7. For the ROS protocol stack, we used version 1.4.0 (diamondback) of ROS. The ZeroMQ stack is based on version 2.9 of ZeroMQ. The measurement procedure for latencies was already described above. Each subscriber collected all latency data and recorded them to a file after an experiment run.

Experiment Results

The latency data files of all subscribers in an experiment were merged and statistically evaluated. For each run, the minimum and maximum latencies were determined, as well as the mean and the standard deviation over the individual latencies of all 1000 messages to all subscribers in the experiment. In addition, values normalized by the number of subscribers and by a message size of 100 KB are provided. The results of the experiments are summarized in Table 5.3.
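The statistical evaluation described above can be sketched as follows; the sample latencies are illustrative and not taken from Table 5.3. Normalization divides by the number of subscribers and by the message size expressed in units of 100 KB, matching the normalized columns of the table.

```python
# Sketch of the statistical evaluation: min, max, mean, and standard
# deviation over recorded latencies, plus values normalized by the
# number of subscribers and a 100 KB reference message size.
# The sample numbers below are illustrative, not from Table 5.3.

import statistics

def evaluate(latencies_ms, n_subscribers, msg_size_kb):
    stats = {
        "min": min(latencies_ms),
        "max": max(latencies_ms),
        "mean": statistics.mean(latencies_ms),
        "stdev": statistics.pstdev(latencies_ms),
    }
    norm = n_subscribers * (msg_size_kb / 100.0)
    stats.update({f"n_{k}": v / norm for k, v in stats.items()})
    return stats

sample = [2.4, 3.1, 2.8, 2.9, 3.6]  # latencies in ms (illustrative)
result = evaluate(sample, n_subscribers=10, msg_size_kb=2000)
print(round(result["n_mean"], 4))   # -> 0.0148
```

The normalized values are what make runs with different subscriber counts and message sizes directly comparable in the analysis below.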

Experiment Analysis

The data obtained by performing the experiments allow us to draw the following conclusions:

• Hypothesis 1 can be considered confirmed by comparing the respective table lines for 100 KB vs. 2 MB-sized messages in the normalized results columns in Table 5.3.

• Hypothesis 2 can also be considered valid, as long as the total communication load is within the bandwidth of the protocol stack used. Comparing the respective lines for the frequencies 10, 30, and 100 Hz, with all other controlled variables held constant, shows rather consistent latencies.

• Hypothesis 3 is clearly falsified. Comparing the maximum latencies of experiments with an increasing number of subscribers shows an almost linear dependency, while the normalized latency values remain comparatively stable. The reason is that, for both protocol stacks tested, the effort for publishing a message is directly proportional to the number of subscribers. The experiments show that this may not play a major role in system development for smaller message sizes, lower frequencies, and small numbers of subscribers, but system designers need to be aware of what they are doing when adding more components to a working system and arbitrarily densifying the network topology with subscriptions to publishers of high-volume data.

We can further conclude that there are clearly relevant limits to scalability for robotics software frameworks using the tested protocol stacks.
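The near-linear dependency on the number of subscribers can be checked directly on the numbers reported in Table 5.3. The small script below uses the ZeroMQ maximum latencies for 100 KB messages at 10 Hz; dividing each by its subscriber count yields roughly constant values, consistent with the analysis above.

```python
# Quick check of the Hypothesis 3 analysis using the ZeroMQ maximum
# latencies from Table 5.3 (100 KB messages, 10 Hz): maximum latency
# scales roughly linearly with the number of subscribers, so the
# per-subscriber (normalized) values stay comparable.

max_latency_ms = {1: 0.351, 10: 3.923, 50: 18.800}  # from Table 5.3

normalized = {n: v / n for n, v in max_latency_ms.items()}
for n, v in normalized.items():
    print(f"{n:2d} subscribers: max/n = {v:.3f} ms")
```

The per-subscriber maxima stay within roughly 12% of each other, which is what "almost linear dependency" means in practice here.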

Conclusions The performed experiments confirmed again that it is difficult to measure a quality attribute ob- jectively. The main reason is that the implementation details of different protocol levels require a careful analysis to cope with the affects of distribution and interaction models. Additionally, it is often difficult to judge whether a particular set of numbers, e.g., round trip time, repre- sents the quality attribute in either positive or negative way. Regardless of these listed issues, the PSV based approach showed to be feasible for the design of the experiments. In future work we will apply the PSV for the assessment of other quality attributes as extensibility or robustness. Moreover, the PSV is also a first step towards a taxonomy in distributed robot software which fosters common understanding of terms among developers. The experimental evaluation demonstrated, that the scalability of distributed robotics software applications faces significant limitations especially when the number of subscribers increases. The effect is more prominent for a very large number of subscribers. The ROS protocol stack is somewhat more

Revision 1.0 51 c 2013 by BRICS Team at BRSU 5.3. Dist+Comm in CMs Chapter 5. Modeling Robot Architectures

System  MsgSize  Freq  #Subs      Min        Max         µ          σ      NMin    NMax    Nµ      Nσ
         (KB)    (Hz)            (ms)       (ms)       (ms)       (ms)     (ms)    (ms)   (ms)    (ms)
ZeroMQ    100     10     1      0.157      0.351      0.157      0.024    0.157   0.351  0.157   0.024
ZeroMQ    100     10    10      0.323      3.923      0.758      0.264    0.032   0.392  0.076   0.026
ZeroMQ    100     10    50      0.857     18.800      5.834      2.375    0.017   0.376  0.117   0.047
ZeroMQ    100     30     1      0.149      0.307      0.150      0.095    0.149   0.307  0.150   0.095
ZeroMQ    100     30    10      0.310      3.935      0.748      0.263    0.031   0.394  0.075   0.026
ZeroMQ    100     30    50      0.273     21.986      5.816      2.405    0.005   0.440  0.116   0.048
ZeroMQ    100    100     1      0.156      0.360      0.156      0.024    0.156   0.360  0.156   0.024
ZeroMQ    100    100    10      0.291      4.341      0.756      0.263    0.029   0.434  0.076   0.026
ZeroMQ    100    100    50      0.147     23.848      5.990      2.565    0.003   0.477  0.120   0.051
ZeroMQ   2000     10     1      2.389      6.890      2.751      0.233    0.119   0.345  0.129   0.012
ZeroMQ   2000     10    10      2.792     49.562     24.719     12.741    0.014   0.248  0.124   0.064
ZeroMQ   2000     10    50      4.943    507.676    169.312     66.412    0.005   0.508  0.169   0.066
ZeroMQ   2000     30     1      2.504      3.653      2.788      0.252    0.125   0.183  0.139   0.013
ZeroMQ   2000     30    10      3.376     88.784     43.037     15.774    0.017   0.444  0.215   0.079
ZeroMQ   2000     30    50     10.185    448.539    134.948     55.786    0.010   0.449  0.135   0.055
ZeroMQ   2000    100     1      2.495      6.979      2.984      0.323    0.125   0.349  0.149   0.016
ZeroMQ   2000    100    10      3.210     64.693     31.809     12.734    0.016   0.323  0.159   0.064
ZeroMQ   2000    100    50      9.022    624.820    124.695     16.066    0.009   0.625  0.125   0.016
ROS       100     10     1      0.306      0.560      0.307      0.012    0.306   0.560  0.307   0.012
ROS       100     10    10      0.359      1.871      1.114      0.498    0.036   0.187  0.111   0.050
ROS       100     10    50      0.568     16.900      5.848      3.116    0.011   0.338  0.117   0.062
ROS       100     30     1      0.285      0.578      0.301      0.014    0.285   0.578  0.301   0.001
ROS       100     30    10      0.346      1.830      1.091      0.490    0.035   0.183  0.109   0.049
ROS       100     30    50      0.562     15.742      5.796      3.090    0.011   0.315  0.116   0.062
ROS       100    100     1      0.289      0.563      0.297      0.011    0.289   0.563  0.297   0.011
ROS       100    100    10      0.322      2.644      1.051      0.465    0.032   0.264  0.105   0.047
ROS       100    100    50      0.571  1,471.966    384.093    245.555    0.011  29.439  7.682   4.911
ROS      2000     10     1      8.605     10.610      9.507      0.186    0.430   0.531  0.475   0.009
ROS      2000     10    10      9.764     95.101     46.792     22.750    0.049   0.476  0.234   0.114
ROS      2000     10    50     32.153 99,451.757 25,503.400 33,595.800    0.032  99.452 25.503  33.596
ROS      2000     30     1      8.655     11.632      9.504      0.162    0.433   0.582  0.475   0.008
ROS      2000     30    10      9.260     95.647     46.675     22.524    0.046   0.478  0.233   0.113
ROS      2000     30    50     49.350 85,281.774 35,414.700 31,853.300    0.049  88.252 35.415  31.853
ROS      2000    100     1      8.909     10.631      9.487      0.160    0.445   0.532  0.474   0.008
ROS      2000    100    10      9.634     95.942     46.647     22.750    0.048   0.480  0.233   0.114
ROS      2000    100    50    148.813 94,461.712 31,413.800 32,051.000    0.149  94.462 31.414  32.051

Table 5.3: Latencies measured for ZeroMQ and ROS for the transmission of 1000 messages of size 100/2000 KB with periodicity 10/30/100 Hz via TCP from one publisher to 1/10/50 subscribers. The experiment parameters are given in the first four columns. Timings refer to the one-way latency of transmitting a single message between the publisher and any of its subscribers. The next four columns give the minimum and maximum values, plus the mean and standard deviation for each experiment. The last four columns give the same values, normalized by the number of subscribers and by a standard message size of 100 KB.

affected than the ZeroMQ-based protocol stack. This suggests that it could be worthwhile to replace some of the communication-related technologies currently used in ROS with facilities

provided by ZeroMQ [94]. The results further suggest a promising topic for future research: automatically determining software connector features from a component-based software design and an assignment of components to threads and hosts (deployment), and automatically selecting and configuring the complete protocol stack in order to obtain optimized communication behavior. Future experimental work is planned to get a more complete picture of the scalability behavior of current robotics software frameworks and of communication-related technologies applicable in robotics. Points of variation include different protocol stacks, different network topologies, and additional values for the controlled parameters.

5.4 DSL for Task and Motion Descriptions

Consider an assembly application in which a set of physical parts is to be assembled into a single physical component, for instance the furniture parts depicted in figure 5.6b. The goal is to automate this process through the introduction of a dual-arm robotic system. The system could be positioned at the conveyor belt which delivers the parts; the robot picks two parts at a time and puts them together. Let us identify a rough set of tasks which enable completion of this goal task.

• Detect objects and identify their positions

• Identify paths to each object and sample/interpolate them

• Move respective arms to object poses along the interpolated paths

• Detect contacts between end-effectors and objects and pick up each object

• Identify possible paths between objects and sample/interpolate them

• Move each object along the interpolated path

• Detect contact and fit objects together

Figure 5.5: Components of a robot task.

For the robot to execute this set of tasks, they should be formalized in a computer-readable form. For instance, most industrial robotics platforms provide some form of task programming language, e.g. KUKA Robot Language [3], ABB RAPID [1], and Fanuc KAREL [2], which a user can use to program the tasks. The core idea behind these languages is to specify tasks in terms of robot motions. This is intended to facilitate the work of an application developer by relieving them of the need to learn the details of a domain and by accelerating application development. Listing 5.1


//variables to store the poses of objects and end-effectors,
//expressed in world (w) reference coordinates
Pose pose_1;
Pose pose_2;
Pose pose_ee_1;
Pose pose_ee_2;

//getpose returns the pose of an object in world reference coordinates
pose_1 = getpose(object_1);
pose_2 = getpose(object_2);
pose_ee_1 = getpose(end_effector_1);
pose_ee_2 = getpose(end_effector_2);

//create a global path between obtained poses
Path path_1 = createpath(pose_ee_1, pose_1);
Path path_2 = createpath(pose_ee_2, pose_2);

//move end-effectors along the paths
move(end_effector_1, path_1);
move(end_effector_2, path_2);

Listing 5.1: Example of a task specification language. It defines relations between objects in the scene and actions on these objects.

gives a rough idea of how such a task specification could look. From this listing one can roughly infer the semantics of the relations/constraints between objects and the actions they should undergo in the scene. Even though a robotic mechanism is part of the scene, as depicted in figure 5.6a, apart from the end-effector poses one does not see robot-related information in the task description itself. Such information is often transparent to the task programmer, who ideally is not aware of the kinematics and dynamics of the mechanism. The programmer expects that whenever they write the getpose(end_effector_1) command, the robot kinematics-related computations will be performed correctly. This should always hold regardless of the type and structure of the robot. One of the facets of robot control software is to allow non-experts to develop complex applications without delving into the mathematical, control, and computer science theory behind the whole process. As seen in listing 5.1, the declarative description of each sub-task does not imply anything concerning interpolators, controller and dynamics algorithms, or optimization options. In robotics, this level of task description is often referred to as task-level or planning-level specification. To make such a specification computer readable, one requires a formal representation of these concepts, a language, and a set of tools that can parse and execute this language. In robotics such languages are called planning, task programming, or plan automation languages, e.g. KRL [3], RAPID [1], PLEXIL [173]. Figure 5.5 depicts four core elements that are often part of a task specification. In software engineering terms such languages are referred to as Domain Specific Languages (DSLs). In our use case, let us assume that there exists such a formal description of our assembly task, as well as an execution infrastructure for the language.
As the task program is executed, the execution infrastructure adds further detail to this specification behind the scenes and often transforms it into another formal programming language. This process can be continued an arbitrary number of times until the desired level of detail is achieved. In many existing robot control software systems, it is a three-step transformation performed at run time. The three-step transformation and the associated infrastructure are often referred to as the planning (task), executive (skill), and reactive (motion) layers of the robot control architecture [107], [8]. These three layers do not only differ in their representations and level of detail, but also have different sets of requirements on quality attributes. This is especially observable when one compares executions on the motion level, such as instantaneous point-to-point Cartesian control of end-effector motion,


(a) Graphical representation of the objects in the scene (part_1, part_2, end_effector_1, end_effector_2, robot, conveyor top, world), also known as a scene graph. (b) Depiction of the furniture assembly task.

Figure 5.6: Furniture assembly scenario.

and on the task level, such as moving the end-effector from pose A to pose B along path X. As can be inferred, the time frames during which these executions take place are different. The languages

• Task level: task specification languages, e.g. IxTeT, Europa

• Skill level: skill specification languages, e.g. rFSM, OpenPRS, TDL, PLEXIL

• Motion level: motion specification languages, e.g. iTaSC, WBC, SoT

• Robot level: robot interfaces & general-purpose programming languages

Figure 5.7: Relation between task-skill-motion layers and the metametamodel.

on each layer attempt to model these requirements with different levels of detail, while still conforming to the same metametamodel (figure 5.7). For instance, listing 5.1 consists mostly of domain-specific primitives with a minimum number of language rules. But when one tries to convey the same semantics in a general-purpose programming language like C++, the number of constructs and language rules increases.


5.5 DSL for Kinematics and Dynamics Computations

In the past, there have been many efforts to use graphs to model robotic mechanisms and to define algorithms on top of these models [64]. The problem is that the implementations of these models and algorithms tend to be monolithic and are often accompanied by large APIs [156], [66]. These APIs and their implementations often either try to support as many features as possible or, worse, ignore some features for the sake of lean designs. Unfortunately, uncomposable and inefficient implementations are not the result of API design only, but also of the way graph-based models are treated in the particular domain. That is, the graph-based models (fig. 5.8a) are not only there to exemplify a number of iterations or dependencies of the model. There is a set of hidden factors, often modeled implicitly, that need to be carefully considered.

(a) Pictorial representation of a robotic mechanism. (b) Graph-based model of a robotic kinematic mechanism; each link is shown together with its computational state.

Figure 5.8: Graph model of a robotic kinematic mechanism.

• concept of domain-specific graph properties - one of the primary decisions in the design of a graph-based model is which properties of the graph are domain specific and which are not. For instance, the property of connectivity is domain independent, whereas the inertial property of each link in the mechanism is not. These properties should be kept separate in order to increase the composability and re-usability of models and implementations. Additionally, one should consider where to store these properties: internally with the graph or externally. In case the properties are stored externally, they form a graph of their own and an expert is then required to define the relation between the two graphs (the graph that models the domain and its property graph).

• concept of mutable and immutable computational states - models are used in the context of some computations; for instance, a model of a kinematic chain is used to compute the inverse dynamics of that mechanism. These computations cause changes in the model properties. These changes are reflected in the computational state of the model (fig. 5.8b).


If the computations change the value of the state, such a state is called mutable; otherwise it is immutable. As in the case of graph properties, the question arises where to store this computational state, and the expert must now define the relation between three graphs (domain model graph ↔ property graph ↔ computational state graph).

• pre/post-processing of graph-based models - these are algorithms which work on the topological information, i.e. the graph structure, of the model. They are often used as a pre/post-processing step to improve the efficiency of possible future computations. A common example that falls into this category is the extraction of a serial kinematic chain from a kinematic tree structure. The extracted chain can then be used, for instance, to efficiently compute forward velocity kinematics.

• scheduling of domain-specific run-time computations in graph-based models - often, an implementation of graph-based algorithms has either a recursive or an iterative form. These programming language constructs define the order (schedule) in which the computations are performed. For instance, in iterative implementations using sub-routines, the implied schedule is sequential and follows the FIFO computational model, whereas for stack-based recursion the implied schedule is still sequential but follows the LIFO computational model. The scheduling is the main factor that influences the composability of the algorithm (thus leading to the issue of imperative specifications explained in sections ?? and 5.3.1).

1  //Example 1: RBDL (rigid body dynamics library)
2  //add body to the model of a robot
3  body_c_id = twoArmRobotModel->AddBody(body_b_id, Xtrans(Vector3d(0., 1., 0.)), joint_c, body_c);
4  VectorNd Q = VectorNd::Zero(twoArmRobotModel->dof_count);
5  VectorNd QDot = VectorNd::Zero(twoArmRobotModel->dof_count);
6  VectorNd Tau = VectorNd::Zero(twoArmRobotModel->dof_count);
7  VectorNd QDDot = VectorNd::Zero(twoArmRobotModel->dof_count);
8
9  //compute forward dynamics for the robot model
10 ForwardDynamics(*twoArmRobotModel, Q, QDot, Tau, QDDot);
11
12 /*********************************************/
13 //Example 2: KDL (kinematics and dynamics library) example
14 //add segments to a tree mechanism
15 twoArmRobot.addSegment(segment8, "L7");
16 twoArmRobot.addSegment(segment9, "L8");
17
18 //define forward position solver and compute pose of the link L7
19 Frame cartPosition;
20 TreeFkSolverPos_recursive cartPoseSolver(twoArmRobot);
21 cartPoseSolver.JntToCart(q, cartPosition, "L7");
22
23 //define inverse dynamics solver for the mechanism and obtain
24 //constraint joint torques
25 TreeIdSolver_Vereshchagin idSolver(twoBranchTree, "L7", rootAcc, 1);
26 idSolver.CartToJnt(q, qDot, qDotDot, alpha, betha, externalNetForce, qTorque);

Listing 5.2: Example of creating a robot mechanism model and using it with an algorithm in the RBDL and KDL libraries.

The examples in listing 5.2 exemplify some of the issues discussed above. Listing 5.2 shows code snippets for our dual-arm robot system in the furniture assembly scenario. The first half of the example shows a listing using the RBDL library [66] and the second half shows similar code using the KDL library [156]. As can be seen, a user first constructs a robot model (lines 3-7) and then feeds it to the algorithm (line 10), which in this case generates joint accelerations. The problem is that, even though the function call (line 10) computes many other quantities which are

useful in a broader context, these quantities are not accessible. Therefore, the user is often required to perform a set of computations and traversals more than once (lines 20-25) in order to achieve the desired outcome.

5.5.1 Synthesis

Considering the factors described so far, this research proposes to decompose monolithic models into finer-grained models with deterministic behavior [69], [72]. These finer-grained models are then composed using a set of predefined composition rules to obtain more complex computational structures. The observations and experience with existing implementations show that most complex computations can be modeled in terms of four primitives and the composition rules between them; the rules encode knowledge about the set of primitives (their semantics) and constraints on their use. We will develop a simple declarative API (aka fluent API) and an implementation which can then be layered with a DSL.

• Structural

– Graph models with domain information - when using graphs to model kinematic mechanisms, in addition to domain-specific graph properties (e.g. inertia, mass matrices), one is required to provide semantic information which qualifies those properties. Think of an example where one is to compute the twist of a link. Here the twist is a domain-specific property of the graph, but it is not well defined. This is because, to fully qualify the twist of the link, one requires three further pieces of information (point, coordinate, and reference) [58]. Such semantic information should be part of the domain-specific graph model in order for the computational operations to generate valid results. Listing 5.3, lines 1-2, shows one possible formulation. There are already existing libraries, like Geometric Relations Semantics [58], which could be used for this purpose.

– Computational state primitives - operations on the model change the values of its properties. These changes are reflected in the computational state of the mechanism. For kinematic mechanisms one can distinguish between joint-space and segment-space (task-space) computational states. The computational state can be either immutable or mutable. For instance, for the inverse dynamics algorithm, joint poses, rates, and accelerations are immutable computational states. Additionally, the current computational state can depend on previous states. In listing 5.3, lines 4-6 and 16-18 define and use state primitives accordingly.

• Behavioral

– Computational operation primitives - the goal is to define a minimal set of finer-grained domain-specific operations from which one can compose complex algorithms. The idea relies on the concept of functional composition as defined in functional programming. In the context of kinematics and dynamics, we can for example distinguish transformation and projection operations. These operations can then be parametrized for each physical quantity they operate on, for example the transformation of a position or twist, or the projection of a wrench or inertia. In order to obtain more complex operations, one functionally composes them following the set of defined composition rules. This is depicted in listing 5.3, line 14, where the operations defined on lines 8-11 are composed into a more complex operation. This operation is then applied with the traversal operations on lines 16-18.

– Traversal operation primitives - the purpose of traversal operations is to express a whole algorithm, e.g. inverse dynamics, in terms of a graph walk. Additionally,

c 2013 by BRICS Team at BRSU 58 Revision 1.0 Chapter 5. Modeling Robot Architectures5.5. DSL for Kinematics and Dynamics Computations

the form of the traversal (depth- or breadth-first search) defines the schedule of the computational operations. The traversal operations take the graph model of the kinematic mechanism and apply computational operations to each vertex and edge while traversing it. The outcome of the traversal is the updated computational state graph of the model. In kinematics and dynamics one can distinguish between traversing a single segment and traversing a whole tree. Examples are shown in listing 5.3, lines 16-18.

• Composition rules - the purpose of the composition rules is to define the framework which allows the combination of the four primitives above in a deterministic way. These rules are domain specific and are described in detail in section 5.5.2.

1  //graph primitives
2  geometric_semantics::Graph twoArmRobot;
3
4  //state primitives
5  kdl_extensions::JointState const jstate;
6  kdl_extensions::SegmentState lstate;
7
8  //computational operations
9  kdl_extensions::transform oper1;
10 kdl_extensions::transform oper2;
11 kdl_extensions::project oper3;
12
13 //compose computational operations
14 kdl_extensions::complexComputation operComplex = compose(oper3, oper2, oper1);
15
16 //traversal primitives
17 iterateOver(twoArmRobot.link0, jstate, lstate, operComplex);
18 iterateOver(twoArmRobot, vector, vector, operComplex);

Listing 5.3: Example for computation and traversal primitives.

5.5.2 Domain-specific composition rules and their semantics

The primitives defined in the previous section are independent of a domain. They are applicable in any graph-based modeling framework. They serve as a form of language interface to describe problems in terms of graphs and graph algorithms. In order for them to be useful, one requires domain-specific implementations of each. Exactly these specific implementations, together with domain-specific composition rules, form a graph-based framework. This framework can then be used to model and solve problems in the specified context. Since for our motion-level specification layer the context is robot kinematics and dynamics, the following composition rules are valid for this category of problems. The robot kinematics and dynamics context imposes some semantic constraints on operations on physical quantities. These should be taken into account while defining the rules. An example is the natural order of computations that one needs to follow to obtain semantically valid results, e.g. one cannot perform a transform<Twist> without first performing a transform<Pose> computation⁴.

• Computational state updates performed through the use of ‘not composed’/primitive com- putational operations and traversal over a segment.

1. Order of precedence of computations - Pose, Twist, AccTwist, Wrench computations should only be performed from left to right as given in the list. Listing 5.4 shows the valid order of possible computations.

1 transform<Pose> comp1;
2 transform<Twist> comp2;
3 iterateOver iterator;
4
5 lstate[0] = iterator(twoBranchTree.getSegment("L1"), jstate[0], lstate[0], comp1);
6 lstate[0] = iterator(twoBranchTree.getSegment("L1"), jstate[0], lstate[0], comp2);

⁴ This shows that an operation may be computationally feasible but not semantically valid.

Listing 5.4: The order of precedence rule for computational operations enforces semantically valid computations.

Note that the lstate[0] argument in the second call has been updated by applying the comp1 operation. Though semantically valid, this use of traversal and computation operations carries a performance penalty. It increases linearly with the number of computations that need to be applied at the given link. In the example above it is two computations in two iterations for a single link. A more efficient way would be to apply all computations in a single traversal of the segment, as shown in listing 5.5.

1 transform<Pose> comp1; transform<Twist> comp2;
2 iterateOver iterator;
3
4 lstate[0] = iterator(twoBranchTree.getSegment("L1"), jstate[0], lstate[0], comp2, comp1);

Listing 5.5: The order of precedence rule for computational operations is used in a single segment traversal. Here comp1 is performed before comp2.

In this example, two computations are performed in a single traversal of the segment; again, comp1 is performed before comp2.

2. Order of precedence of traversals - in the rule above, while updating a link state with the computation comp1, the initial input state is assumed to be 'zero'. That is, there is no dependence on a previous state. In robot kinematics and dynamics this means that the operations above update the link's state regardless of the states of that link's parents. Mechanically, this means that only the contributions of the link's supporting joint are considered. But often segments are part of a bigger kinematic mechanism, so their state updates also depend on the states of their parents. That is why, in order to achieve semantically valid link state updates in the mechanism, the traversal constraint should be satisfied as depicted in listing 5.6.

1 transform<Pose> comp1; transform<Twist> comp2;
2 iterateOver iterator;
3
4 lstate[0] = iterator(twoBranchTree.getSegment("L1"), jstate[0], lstate[0], comp1);
5 lstate[0] = iterator(twoBranchTree.getSegment("L1"), jstate[0], lstate[0], comp2);
6
7 lstate[1] = iterator(twoBranchTree.getSegment("L2"), jstate[1], lstate[0], comp1);
8 lstate[1] = iterator(twoBranchTree.getSegment("L2"), jstate[1], lstate[1], comp2);

Listing 5.6: The order of precedence rule for (segment) traversals enforces valid computational state dependence for the computations. Note the use of lstate[0] in the traversal of the L2 segment.

As can be seen in listing 5.6, the segment traversals on L1 and L2 should be performed in the same order in which these segments appear in the mechanism. Here L1 is the parent of L2; therefore the L2 link state updates depend on the L1 state updates. Hence lstate[0] is provided as an input to the first L2-related operation. A similar effect, but computationally more efficient, could have been achieved by applying both computations in a single traversal, as depicted in listing 5.7.

c 2013 by BRICS Team at BRSU 60 Revision 1.0 Chapter 5. Modeling Robot Architectures5.5. DSL for Kinematics and Dynamics Computations

1 transform<Pose> comp1;
2 transform<Twist> comp2;
3 iterateOver iterator;
4
5 lstate[0] = iterator(twoBranchTree.getSegment("L1"), jstate[0], lstate[0], comp2, comp1);
6 lstate[1] = iterator(twoBranchTree.getSegment("L2"), jstate[1], lstate[0], comp2, comp1);

Listing 5.7: The combined use of the order of precedence for (segment) traversals and computational operations.

• Link state updates performed through the use of ‘not composed’/primitive computational operations and traversal over a tree.

1. Order of precedence of computations - Pose, Twist, AccTwist, Wrench computations should only be performed from left to right as given in the list. Listing 5.8 shows the valid order of possible computations. Here the link computational state graph lstate is updated by Pose computations in the first traversal, and during the second call the same state graph is updated by Twist computations for each segment in the tree. On line 7, the two computations are performed in a single traversal of the tree mechanism. This is more efficient than traversing the tree twice, as is done on lines 5-6.

1 transform<Pose> comp1;
2 transform<Twist> comp2;
3 iterateOver traverse;
4
5 traverse(a_chain, jstate, lstate, comp1);
6 traverse(a_chain, jstate, lstate, comp2);
7 traverse(a_chain, jstate, lstate, comp2, comp1);

Listing 5.8: Application of the order of precedence of computations rule for tree traversal.

2. Order of precedence of traversals - when designing kinematics and dynamics algorithms in terms of graph traversals, one has to realize that in some cases a set of traversals may be required. These traversals can take place in different directions. For instance, to compute the inverse dynamics (ID) of a robot, one is required to perform two traversals in opposite directions, called the outward and inward sweeps, respectively. In our framework, such cases are addressed through forward and reverse traversals. For instance, a forward traversal will update the computational state graph of the mechanism for ID; this updated state graph is then used as an input for the reverse traversal.

• Composition semantics for computational operations - the rules defined so far involved 'not composed' computational operations only. But algorithms often contain many primitive operations, and it becomes cumbersome to program an algorithm by listing all possible operations. Our framework allows one to functionally compose primitive computational operations into more complex computations. This makes it possible to define complex operations once and apply them many times with different traversals. We distinguish two categories of computational operation composition: explicit composition and context-dependent composition. In explicit composition, a set of computations is explicitly combined through a composition function to form a new complex computation with the same functionality as the combined operations. Listing 5.9 shows the application of this rule. Here the type of


complexComputation on line 6 is automatically inferred from the types of the composed operations.

1 transform comp1;
2 transform comp2;
3 transform comp3;
4 iterateOver traverse;
5
6 complexComputation newComplexOperation = compose(comp2, comp1);
7 complexComputation newComplexOperation2 = compose(comp3, comp2);
8
9 traverse(a_chain, jstate, lstate, comp2, comp1);
10 traverse(a_chain, jstate, lstate, newComplexOperation);

Listing 5.9: The use of composed computational operations with tree traversal.

On the other hand, in context-dependent composition there is no explicit call to the compose function. Here the composition semantics follows from the context⁵ in which the computational operations and computational state graphs are defined. For instance, performing two consecutive segment traversals for two neighboring segments should give the same result as if the traversal were performed on the chain consisting of these two segments. Listing 5.10, lines 6-10, depicts this situation. The same result can also be achieved using composed computational operations, as listed on lines 12-16.

1 transform comp1;
2 transform comp2;
3 iterateOver traverse;
4 iterateOver iterator;
5
6 //context dependent composition with two primitive operations
7 lstate[0] = iterator(twoBranchTree.getSegment("L1"), jstate[0], lstate[0], comp2, comp1);
8 lstate[1] = iterator(twoBranchTree.getSegment("L2"), jstate[0], lstate[0], comp2, comp1);
9 //this is equivalent to the two calls above
10 traverse(a_chain, jstate, lstate, comp2, comp1);
11
12 //the same result as above, but using a composed operation instead
13 complexComputation newComplexOperation = compose_ternary(comp2, comp1);
14 lstate[0] = iterator(twoBranchTree.getSegment("L1"), jstate[0], lstate[0], newComplexOperation);
15 lstate[1] = iterator(twoBranchTree.getSegment("L2"), jstate[0], lstate[0], newComplexOperation);
16 traverse(a_chain, jstate, lstate, newComplexOperation);

Listing 5.10: An example for the context dependent composition.

5.6 DSL for System Deployment

Deploying a complete, integrated robot application on service robots is a cumbersome and error-prone exercise. The following anecdote exemplifies this: during the RoboCup@Home⁶ competition 2010, we accidentally deployed the speech recognition component of a service robot to another host, forgetting to maintain the software connection to the microphone. Hence, during the competition the robot was no longer able to understand speech commands. After a troublesome assessment, we identified the error in the deployment description, where no software connection checking was performed. In robotics, software deployment is usually achieved through some kind of deployment infrastructure provided by the underlying robot software framework. For instance, the tool roslaunch⁷ of the popular ROS framework takes a description of the

⁵ Here context has the same meaning as in a context-dependent programming language.
⁶ http://www.robocupathome.org/
⁷ http://www.ros.org/wiki/roslaunch

architecture as an input and initiates the deployment according to it, e.g., nodes are started, parameters are set, and so on. The framework-specific deployment description is created by the system integrator and contains two main parts: a component description and a system description. Depending on the underlying component model of the framework, the component description contains information about the provided and required input and output in terms of services, topics, or data ports, whereas the system description encodes which components interact with each other. As the software infrastructure of real-world applications is easily composed of 20 to 300 components, this low-level and framework-specific deployment description becomes hard to maintain. In addition, it is current practice in robotics to implicitly encode knowledge about other architectural aspects (e.g., computational resources) and design decisions in framework- and platform-specific tools. Furthermore, heterogeneous representations are used during design and development, and means for documentation are missing. Therefore, the identification and dissemination of design principles and best practices is limited. Furthermore, as service robots are built by several engineers and developers working on different architectural aspects, such information is crucial and should be captured as part of a deployment description. In summary, software deployment is cumbersome and it remains challenging

• to model different architectural aspects in a framework-independent and explicit manner,

• to reuse and deploy a configured component architecture on another platform, and

• to ensure properties and check constraints prior to the actual deployment on the robot.

5.6.1 Model-based Software Deployment

To improve software deployment in robotics we introduce a model-based software deployment approach (see Figure 5.9), described in the following. The main objective of the approach is to allow systematic deployment taking into account both architectural aspects and constraints introduced by different phases of the development process. Therefore, we decompose a robot application into software and hardware aspects, which are further decomposed into sub-aspects (see Figure 2.1). To encode information about these aspects we advocate model-based techniques, because the concrete models of the aspects change rapidly whereas their representation (meta-models) remains rather static. We argue that the different architectural aspects are modeled, within different phases of a development process, by developers and engineers who are not necessarily performing the deployment themselves. Figure 5.9 shows the four phases of the BRICS robot application development process (RAP) which are relevant for the deployment task.

5.6.2 The BRICS Research Camp Use Case

To exemplify our approach we make use of a real-world application developed within the scope of the 4th BRICS Research Camp on Robot Software Architectures8. The use case consists of a robot moving through an artificial arena while avoiding obstacles. Optionally, the robot is instructed to pick up an object at one location (inside the arena) and to deliver this object to another location (see also Figure 5.10). The robot navigation can be performed according to two navigation strategies: map-based (a geometrical map of the environment is provided) or marker-based (the robot simply follows visual markers placed on the floor). This scenario allows the development of the different applications presented in the following. These applications require several functionalities, such as inverse kinematics, bounding box estimation, navigation, localization, and mapping. Even though the application is simple from a functional point of view, the deployment structure is rather complex as several variation points have to be resolved.

8www.best-of-robotics.org/4th_researchcamp


[Figure: overview diagram of the four phases (Platform Building, Functional Design, Capability Building, System Deployment) with their associated models (Mechanical, Electrical, Computational, Template System, Runtime, Feature, Resolution, Configured System, Deployment Sequence, and Deployment Models) and tools (Feature Selector, Resolution Engine); the legend distinguishes tools, DSLs, model editors, M2T transformations, input/output models, and deployment files.]

Figure 5.9: The figure visualizes the model-based software deployment approach. The approach includes four phases of the development process shown on top. In each phase models are created. A color coding is used to show which models are created in which phase. The models are either used (referenced) in other models or serve as an input/output for a tool.

Figure 5.10: The fetch and carry application used for evaluation.

Platform Building Phase

According to the RAP process model, the platform building phase is performed once a good understanding exists of which tasks the target robot application needs to solve and how the environment and task execution context look. In this phase the robot is configured in terms of required sensors and actuators as well as their mounting and placement on the robot (see the Mechanical and Electrical Architecture Models in Figure 5.9). Other configuration choices may pertain to the computational hardware to be integrated (see the Computational Architecture Model (CAM) in Figure 5.9, whose meta-model is depicted in Figure 5.11). In robotics, however, modeling the CAM is either neglected or performed in an ad-hoc manner. We propose to adapt existing, industry-proven CAM modeling approaches such as AADL [65]. In the pick and place use case a KUKA youBot is used. The robot is equipped with two computational hosts: an internal computer, which has access to the EtherCAT bus and thereby to the joint controllers, and an external computer, which is more powerful and mounted on the sensor tower of the youBot. To model the CAM we provide the set of primitives shown in Figure 5.11. The CAM meta-model is inspired by the AADL platform meta-model and allows modeling the basic structure of a computational host (e.g., CPU, memory, and bus connections) as well as the properties relevant for deployment such as hostname, IP address, and amount of memory.

[Figure: a ComputationalArchitectureModel contains Platforms (with name, description, hostname, and IP attributes), each composed of Processors, Memory, Devices, Buses, and Properties.]

Figure 5.11: The computational architecture meta-model.

Figure 5.12: The feature model that describes the functional variability of the case study. Features with the black circle on top are mandatory and represent functionalities that have to be present in every application. Features with the white circle instead represent optional functionalities. White arcs represent alternative containments (only one sub-feature can be selected) while black arcs depict or containments (at least one sub-feature has to be selected).
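The CAM primitives can be mirrored in a few lines of code. The following Python sketch models the two youBot hosts from the use case; it is illustrative only (the actual meta-model is defined in EMF/Ecore), and all hostnames, IP addresses, and device names are invented:

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative mirror of the CAM meta-model primitives from Figure 5.11.

@dataclass
class Property:
    name: str
    value: str = ""

@dataclass
class Processor:
    name: str

@dataclass
class Memory:
    name: str

@dataclass
class Device:
    name: str

@dataclass
class Platform:
    name: str
    description: str
    hostname: str
    ip: str
    processors: List[Processor] = field(default_factory=list)
    memory: List[Memory] = field(default_factory=list)
    devices: List[Device] = field(default_factory=list)
    properties: List[Property] = field(default_factory=list)

# The youBot use case: an internal host with EtherCAT access and a more
# powerful external host on the sensor tower (all concrete values invented).
internal = Platform("youbot-internal", "on-board PC with EtherCAT access",
                    "youbot-internal", "192.168.1.100",
                    processors=[Processor("internal-cpu")],
                    devices=[Device("ethercat-master")])
external = Platform("youbot-external", "external PC on the sensor tower",
                    "youbot-external", "192.168.1.101")
cam = [internal, external]
```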

Functional Design and Capability Building Phases

This subsection describes the Functional Design and the Capability Building phases, which are contained in the blue dashed box depicted in Figure 5.9. These phases and the associated models and tools are the result of previous work presented in [80], [40], and [79]; the tooling is available on GitHub.9 Here we provide a brief summary. During these phases the engineers design a set of models that describe the architectural and the functional variability of a software product line, which is a family of similar applications that share the same architecture and are built by reusing a set of software components [50]. The ultimate goal of these phases is to

9http://robotics-unibg.github.com/FeatureModels

automatically generate a model that describes the architecture of a specific application of the product line (the Configured System Model (CSM)) starting from a selection of functionalities. This CSM is then later deployed on the robot. The architecture of the software product line is represented in a Template System Model (TSM), which specifies: (a) the set of components that can be used for building all the possible applications of the family (some are mandatory, some others optional) and (b) a set of connections among components (some are stable, some others variable). The architectural variability of the product line can be resolved by selecting optional components, their specific implementation, the values of their configuration properties, and the variable connections. In this way, starting from the TSM it is possible to generate the CSM. In other words, the TSM is a skeleton that can be customized for defining the architecture of a specific application. This model can be designed manually or can be generated by means of a set of automatic transformations as described below. The TSM of the Research Camp use case contains around 25 components that provide several functionalities for moving the base and avoiding obstacles, for implementing the different navigation strategies, and for manipulating and perceiving objects. Depending on the desired application, different sets of functionalities are used. While the TSM describes the architectural variability of the product line, a second model is required for orthogonally representing the variability in terms of functionalities. In our previous work we proposed to model the functional variability by means of the Feature Models formalism [100]. A feature model is a hierarchical composition of features. A feature defines a software property and represents an increment in program functionality. The action of composing features, or in other words selecting a subset of functionalities from all the features contained in a feature model, corresponds to defining the configuration of an application that belongs to the product line and provides the required functionalities. The orthogonality between the TSM and the Feature Model is important because they are typically defined by engineers with different competencies. Indeed, the design of a good architecture (TSM) requires advanced software engineering techniques, while the functional variability modeling requires a deep understanding of the application domain, which is part of the robotics experts’ knowledge. The functional variability of the Research Camp use case is described in the feature model depicted in Figure 5.12. The features are described in the following list, which also highlights how the selection of a functionality influences the definition of the application architecture.

• Local Navigation: this mandatory feature means that every possible application has to provide basic navigation functionalities such as trajectory generation, obstacle avoidance, mobile base control, and localization.

• Navigation Strategy: the alternative containment of this feature represents the possibility of choosing between map-based and marker-based navigation. Selecting one of the two strategies implies using different components that provide different functionalities (these components are selected from the set of components defined in the TSM). As a consequence of this choice the connections between the components change. The functionalities required for the navigation have been described in [40] and [79].

• Obstacle Avoidance: the alternative containment of this feature represents a variation point regarding the algorithm used for avoiding obstacles. According to the selected feature we have to use different implementations for the component that provides this functionality.

• Environment: when the robot performs map-based navigation, it can operate in different environments that are represented by different maps. The alternative containment of this feature allows the specification of the map, which will be used by the components that provide the path planner and the localizer functionalities. These components have a property that allows us to define the path of the map description file.


[Figure: a DeploymentSequenceModel contains AbstractSequenceNodes (with name and description) connected by source/target Edges; RTTSequenceNodes reference TaskContexts and ROSSequenceNodes reference Nodes from the Configured System Model.]

Figure 5.13: The deployment sequence meta-model.

• Object Manipulation: this optional feature represents the capability of manipulating objects. When the engineer decides to deploy this functionality, the component that provides the inverse kinematics is mandatory while the component for the Planned Grasping functionality is optional.

• Object Perception: this optional feature represents the last variation point of the product line. This functionality can be used for detecting objects that have to be grasped but also for detecting objects that represent visual markers that the robot has to follow.
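The feature selection logic above can be sketched in a few lines of Python. This is an illustrative check, not the actual Feature Selector tooling; the set of mandatory features and the group memberships are read off Figure 5.12, and the cross-tree constraint (Object Manipulation requires Object Perception) comes from the use case:

```python
# Minimal feature-model validity check for the Research Camp use case.
# Assumption: Navigation Strategy and Obstacle Avoidance are mandatory
# alternative groups, as suggested by Figure 5.12.

MANDATORY = {"Local Navigation", "Navigation Strategy", "Obstacle Avoidance"}
ALTERNATIVE_GROUPS = {
    "Navigation Strategy": {"Map", "Marker"},   # exactly one sub-feature
    "Obstacle Avoidance": {"DWA", "VFH"},       # exactly one sub-feature
    "Environment": {"Map 1", "Map 2"},          # exactly one, if parent selected
}
REQUIRES = {"Object Manipulation": "Object Perception"}

def valid_selection(features):
    if not MANDATORY <= features:
        return False
    for parent, group in ALTERNATIVE_GROUPS.items():
        if parent in features and len(features & group) != 1:
            return False
    for feat, required in REQUIRES.items():
        if feat in features and required not in features:
            return False
    return True

# Marker-based navigation without manipulation: a valid configuration.
sel = {"Local Navigation", "Navigation Strategy", "Obstacle Avoidance",
       "Marker", "DWA"}
print(valid_selection(sel))                               # True
# Manipulation without perception violates the cross-tree constraint.
print(valid_selection(sel | {"Object Manipulation"}))     # False
```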

The feature models also allow the expression of constraints among the selection of features. In our use case there is a single constraint: when the feature Object Manipulation is selected to be part of the application, the Object Perception feature has to be present. Once the architecture of the product line has been defined and its functional variability modeled, the engineer has to define how the CSM can be automatically generated starting from the TSM and a selection of functionalities (i.e., features). This information is encoded in the Resolution Model, which specifies how the architectural variability of the TSM has to be resolved according to the features selected in the Feature Model. The resolution model associates each feature with a set of transformations, which have to be executed on the TSM when the corresponding feature is selected. These transformations allow us to automatically (a) select which components have to be used and set their implementation, (b) configure the values of their properties, and (c) create and remove connections between them. For example, the Resolution Model of our use case specifies that when the feature Marker Based Navigation is selected, the components that provide the marker-based functionalities (marker detection, marker path planning, . . . ) have to be used and connected to the components that provide the local navigation functionalities. The last step concerns the automatic generation of the CSM that describes the architecture of a specific application. The engineer uses the Feature Selector10 to define the set of features that reflect the functional requirements of the desired application. The Resolution Engine receives the feature selection as input and, by using the three models described above, automatically generates the CSM. Basically, the Resolution Engine (a) checks that all the constraints imposed on the Feature Model are satisfied by the feature selection; (b) creates a copy of the TSM; (c) modifies this copy by applying only the transformations (described in the Resolution Model) that are associated with the selected features. In our use case the engineer can deploy several applications that go from a simple map-based

10The Feature Selector is a graphical editor that allows the user to manually select the desired features.


[Figure: a RuntimeArchitectureModel contains ExecutionUnits (with name and description); Processes and Threads are ExecutionUnits and may carry Properties.]

Figure 5.14: The runtime architecture meta-model.

[Figure: a DeploymentModel contains DeploymentElements, each mapping an AbstractComponentSet to a Platform (from the Computational Architecture Model) and an ExecutionUnit (from the Runtime Architecture Model); RTTComponentSets reference TaskContexts and ROSComponentSets reference Nodes from the Configured System Model; an optional SequenceModel is referenced from the Deployment Sequence Model.]

Figure 5.15: The deployment meta-model.

navigation (the selected features are Local Navigation, Marker and one of the Obstacle Avoidance sub-features) to the pick and place of objects with marker-based navigation.
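The three Resolution Engine steps, (a) checking the feature constraints, (b) copying the TSM, and (c) applying the feature-bound transformations, can be sketched as follows. This Python sketch only illustrates the control flow (the real engine operates on EMF models); the component names and the data shapes are invented:

```python
import copy

def resolve(tsm, feature_model_check, resolution_model, selected_features):
    # (a) check that the selection satisfies the feature-model constraints
    if not feature_model_check(selected_features):
        raise ValueError("feature selection violates feature-model constraints")
    # (b) work on a copy of the Template System Model
    csm = copy.deepcopy(tsm)
    # (c) apply only the transformations bound to the selected features
    for feature in selected_features:
        for transformation in resolution_model.get(feature, []):
            transformation(csm)
    return csm

# Hypothetical TSM with a mandatory component and open variation points.
tsm = {"components": {"base_driver"}, "connections": set()}
resolution_model = {
    "Marker": [lambda m: m["components"].update({"marker_detector",
                                                 "marker_planner"}),
               lambda m: m["connections"].add(("marker_planner",
                                               "base_driver"))],
}
csm = resolve(tsm, lambda sel: True, resolution_model, {"Marker"})
print(sorted(csm["components"]))
```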

System Deployment Phase

During the System Deployment phase the previously separate architectural aspects and models are composed into a coherent Deployment Model (DM) (see meta-model in Figure 5.15). This is done by mapping the component architecture defined in the CSM to executable units such as processes and threads modeled in the Runtime Architecture Model (RAM) (see meta-model in Figure 5.14), and by mapping the executable units onto hosts defined in the CAM. In addition, the Deployment Sequence Model (DSM) (see meta-model in Figure 5.13) defines the order in which the components are deployed. The optional, sequential deployment of components might be required for testing purposes or to cope with the complexity imposed by large-scale component networks (e.g., step-wise and layered deployment). We model the sequence as a directed acyclic graph (DAG). In the Research Camp use case this enables us to encode that the arm components must be deployed before any other components, as these perform a homing procedure. Like the CAM, the RAM is inspired by the AADL meta-model on execution units. In the pick and place use case, depending on the selected functionality, the components are deployed as processes or threads.
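Because the deployment sequence is a DAG, a valid start-up order can be derived with a standard topological sort. The following sketch uses Python's standard library (3.9+); the component names are invented, but the ordering constraint mirrors the use case, where the arm components must come up first:

```python
from graphlib import TopologicalSorter

# edges: component -> set of components that must be deployed before it
sequence = {
    "arm_driver": set(),
    "inverse_kinematics": {"arm_driver"},
    "object_perception": {"arm_driver"},
    "task_coordinator": {"inverse_kinematics", "object_perception"},
}
order = list(TopologicalSorter(sequence).static_order())
print(order[0])  # arm_driver
```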


5.6.3 A Domain-specific Language for Deployment

To support the model-based deployment approach we developed a Domain-specific Language (DSL) which enables system integrators to create concrete domain models and to generate (enriched) framework-specific deployment descriptions. We defined the following language requirements (see also Kolovos et al. [106]).

• Orthogonality. The DSL primitives and abstractions shall correspond to the domain concepts introduced previously. More precisely, each language construct should represent one distinct concept in the domain in order to keep them orthogonal and to ease the development of code generators.

• Supportability. To increase acceptance, DSL users demand an infrastructure which supports typical activities such as model creation, editing, deletion, and transformation.

• Quality. The DSL should support the development of faultless deployment descriptions via the language itself or through tooling.

5.6.4 Implementation

We developed a language infrastructure which allows model editing, validation, and generation (see DSL editor and M2T elements in Figure 5.9). The infrastructure is based on the Eclipse-based framework Xtext.11 The tools related to the feature models and the CSM, in contrast, are graphical and based on the Eclipse Modeling Framework (EMF).12 For the deployment DSL itself, the straightforward integration of Xtext in Eclipse helped us to develop the language in a rapid manner. For instance, a DSL editor with syntax highlighting and code completion can be automatically generated from the meta-models. This built-in support allowed us to focus on the development of the language itself. Even though for some targeted DSL users the use of Eclipse as a language framework might be arguable, the syntax specification is independent of Eclipse and hence could be used with other frameworks as well. In addition, the deployment description generation facilities have been realized with Xtend,13 a special-purpose programming language which eases the writing of code generators through powerful template expressions.

Syntax and Example

In Figure 5.16 an excerpt of the deployment DSL is shown. Here, a component set is defined which imports a set of components from the Configured System Model, which models the architecture of the perception components and of components related to arm control. Additionally, two different computational hosts are imported which are modeled in two different Computational Architecture Models. In the example, a deployment sequence is defined (the arm control should be deployed before the perception) and later used in the deployment model. To ease modeling we equipped the DSL with some syntactic sugar in the form of high-level keywords such as deploy and on.
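Since Figure 5.16 is not reproduced here in textual form, the following invented snippet only illustrates the flavor of such a description. The keywords deploy, on, and before are mentioned in the text; all identifiers and the concrete syntax are hypothetical:

```
// Hypothetical deployment DSL snippet (concrete syntax invented for illustration)
import "perception.csm"     // components from the Configured System Model
import "arm_control.csm"
import "internal_pc.cam"    // hosts from Computational Architecture Models
import "external_pc.cam"

componentset arm_control { ... }
componentset perception  { ... }

sequence startup { arm_control before perception }

deployment fetch_and_carry {
    deploy arm_control on internal_pc
    deploy perception  on external_pc
    use sequence startup
}
```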

Constraint Checking

Once domain concepts are represented as meta-models, we can also define constraints on concrete domain models which conform to these meta-models. In the work presented here, we applied the

11http://www.eclipse.org/Xtext/ 12http://www.eclipse.org/modeling/ 13http://www.eclipse.org/xtend/


Figure 5.16: An excerpt of the deployment DSL.

OCL (Object Constraint Language)14 formalism to model constraints. We distinguish between two types of constraints:

• Atomic constraints which are valid for domain models conforming to one single meta- model (e.g., constraints for the CSM).

• Composition constraints which appear when we compose the models in the deployment model.

For instance, every CSM is either a ROS-based or an Orocos-based component network. Therefore, we modeled several framework-specific constraints such as those shown in Figure 5.17. Here we ensure that the types of an input and an output port match and that a connection between them is well-configured according to the underlying Orocos RTT component model.

context ConnectionPolicy inv: self.inputPort.type = self.outputPort.type

context ConnectionPolicy inv: self.name.size() <> 0 and self.bufferSize <> 0

Figure 5.17: An excerpt of the Orocos RTT-specific OCL constraints.

Another example of an atomic constraint is shown in Figure 5.18, where we ensure that every platform has a non-empty hostname. In addition, composition constraints are defined, which are more complex:

• Each component of a CSM needs to be deployed on one host defined in the CAM.

14http://www.omg.org/spec/OCL/2.0/


• Each component of a CSM may not be deployed several times on a host defined in the CAM.

• Each connection between components in the CSM implies that the components are deployed on the same host defined in the CAM, or that the different hosts are connected with each other.

• Components of a CSM which are deployed as threads and belong to one process defined in the RAM must run on the same host defined in the CAM.

context Platform inv: self.hostname.size() <> 0

Figure 5.18: An excerpt of the platform OCL constraints.
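The first two composition constraints can be re-expressed procedurally. The following Python sketch is not the OCL machinery used in our tooling; it only illustrates the checks, and the data shapes (a list of component-to-host pairs and a set of known hosts) are invented:

```python
# Illustrative checker for two composition constraints: every component must
# be mapped to a host known from the CAM, and no component may be deployed
# twice on the same host.

def check_deployment(deployment, cam_hosts):
    errors = []
    seen = set()
    for component, host in deployment:
        if host not in cam_hosts:
            errors.append(f"{component}: unknown host '{host}'")
        if (component, host) in seen:
            errors.append(f"{component}: deployed twice on '{host}'")
        seen.add((component, host))
    return errors

hosts = {"youbot-internal", "youbot-external"}
dm = [("arm_driver", "youbot-internal"),
      ("object_perception", "youbot-external"),
      ("arm_driver", "youbot-internal")]   # duplicate -> constraint violation
print(check_deployment(dm, hosts))
```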

5.6.5 Model Generation for Robot Software Frameworks

To make the DSLs usable in a working environment and to show the feasibility of the overall approach, we developed a proof-of-concept model generator for the ROS framework. As the presented DSL is framework-independent, the DSL concepts and abstractions have to be transformed to framework-specific concepts. In this work we generate roslaunch configuration files. The roslaunch tool is the major deployment mechanism in ROS; it reads configuration files (specified in an XML format) and launches nodes on specified hosts. We used the top-level elements of the XML specification to transform the corresponding DSL concepts. For each component set specified in the DSL a dedicated ROS launch file is generated. In addition, each component specified in the CSM and included in the DM is transformed to a ROS node (<node> tag) in the launch file. The host information for each component is used for the <machine> tag in the XML specification. Even though we are able to generate valid roslaunch files, it is not possible to transform all concepts; for example, the deployment sequence is not considered by the roslaunch tool. If the user initiates roslaunch file generation based on a deployment model in which a deployment sequence is specified, a warning is issued. Generation for other frameworks such as Orocos RTT will follow the same approach, namely language elements will be transformed to the primitives available in the framework-specific deployment infrastructure. For instance, in Orocos RTT we would generate Lua deployment scripts. Here, the application of a deployment sequence is possible as the Orocos RTT component model foresees a component lifecycle which can be used to start and deploy components in a specific order.
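The M2T step for ROS can be sketched as follows. The real generator is written in Xtend; this Python sketch only illustrates emitting <machine> and <node> elements from a deployment model, and all package, type, and host names are invented:

```python
from xml.etree import ElementTree as ET

def generate_roslaunch(component_set, machines):
    """Emit a roslaunch XML string from a component set and a host mapping."""
    launch = ET.Element("launch")
    # One <machine> element per host taken from the CAM.
    for name, (address, user) in machines.items():
        ET.SubElement(launch, "machine", name=name, address=address, user=user)
    # One <node> element per component, bound to its machine.
    for comp in component_set:
        ET.SubElement(launch, "node", name=comp["name"], pkg=comp["pkg"],
                      type=comp["type"], machine=comp["machine"])
    return ET.tostring(launch, encoding="unicode")

machines = {"external": ("youbot-external", "robot")}
components = [{"name": "object_perception", "pkg": "perception",
               "type": "perception_node", "machine": "external"}]
xml = generate_roslaunch(components, machines)
print(xml)
```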

5.6.6 Discussion and Related Work

The BRICS Research Camp use case exemplifies the huge variability which characterizes modern robot applications. In order to cope with this variability and its effects on deployment, we keep the architectural aspects separate and compose them as late as possible. The DSL snippet shown in Figure 5.16 exemplifies this: modifying the deployment setting (e.g., deploying components on another host) involves changing only a couple of lines. In addition, the feature-oriented approach allows a quick design of the architecture of a specific application by simply selecting its functionalities and reusing existing solutions.

5.6.7 Requirements Revisited

• Orthogonality. In general, it is possible to transfer the architectural aspects to concrete language abstractions. Surprisingly, a handful of high-level keywords such as the before


statement are sufficient to encode rather complex situations. In addition, the models are as orthogonal as possible. In fact, the only models which are affected by changes in other models are the sequence and the deployment model.

• Supportability. The language infrastructure is Eclipse-based and therefore graphical. However, we observed that developers hesitate to use this environment because they are used to working with command-line tools. Nevertheless, we argue that given the faster development time that comes with using the tools, developers will consider adopting the graphical approach.

• Quality. Thanks to the constraint checking, we can identify incomplete and erroneous information already during modeling. Furthermore, the powerful composition constraints are aimed at the situations described in the beginning, where a change in one model affects another model. This automatic checking is very advantageous when compared to the manual inspection of framework-specific configuration and deployment scripts.

Lessons Learned

We can report the following lessons with respect to the overhead introduced by the DSL.

• From the DSL user’s perspective, an external DSL needs to be well integrated into the overall development workflow. Otherwise it is unlikely that the DSL will be used in a daily working environment. In our future work we will therefore integrate the deployment DSL with BRIDE,15 a model-driven engineering IDE for robotics based on Eclipse. The TSM can already be designed with BRIDE, the feature and resolution models are already integrated, and the resulting CSM can be visualized with it.

• From the DSL developer’s perspective, there is clearly the overhead of developing the language itself, the corresponding code generation facilities, and the tooling. These activities demand good domain knowledge. However, modern DSL frameworks such as those applied in this work help to master these activities and to focus on the language design itself.

Related Work

Quite recently, model-based software development approaches have become popular in robotics. In [59] the RobotML DSL is presented. The DSL includes a domain model which contains four packages, namely architecture, behavior, communication, and deployment. The deployment package specifies a set of constructs that can be used to define the assignment of a robotic system to a target platform (middleware or simulator). Hence, deployment in RobotML refers to platform-specific code generation. In contrast to our approach, the link to the development process is missing and constraints are not checked as elaborately. In [12] the deployment infrastructure for OpenRTM is presented. Here, deployment is considered as a part of component and system lifecycle management. The authors present a pragmatic approach to deployment, focusing on implementation-level details (e.g., how manager services interact and how components are instantiated). To realize the deployment process, so-called profile descriptions (component and system profiles) are required. One can view component and system profile descriptions as DSLs. However, in contrast to our approach the focus is on elaborate tooling and not on the composition of different architectural aspects. Similarly, in Schlegel et al. [142] a platform description model is defined which serves as a deployment target. However, the explicit separation of architectural aspects has not been addressed.

15http://www.best-of-robotics.org/bride/

Chapter 6

Implementing Robot Architectures

6.1 OODL Interface Guidelines

Modern service robots are composed of a wide range of different hardware elements provided by third-party vendors, ranging from sensors like monocular cameras, laser scanners, and time-of-flight cameras, to miscellaneous actuators such as motors and drives, mobile bases, or manipulators like hands and robot arms. Some of them, in particular high degree-of-freedom manipulators such as the KUKA LWR, are already complex systems themselves. The hardware integration of these sensors and actuators into a complete robotic system is usually achieved through some kind of field bus system; examples are CAN or Ethernet-based systems such as EtherCAT or PROFIBUS. Furthermore, computation is often performed on a range of different computational nodes such as microcontrollers, personal computers, or even programmable logic controllers. The diversity and heterogeneity of hardware usually leads to higher software complexity. Examples are several different representations (data types) for the same sensor data, the need to handle diverse sensor quality, heterogeneity of the hardware interfaces themselves, low-level protocol issues, or the need to distinguish at runtime between multiple hardware elements of the same kind. Vendors of hardware components usually provide their own customized software interfaces in the form of hardware device interfaces or device drivers, which allow the user to work with these devices from an application developer perspective. Device drivers are crucial as they provide essential functionality (e.g., establishing communication with the device, initializing the device), but very often these device driver interfaces are not very convenient to use. For instance, interfaces often do not conceal device peculiarities and assumptions like low-level protocols and procedures. In general, device drivers are on a very low level of abstraction, tightly coupled to the device which they represent, policy-rich, and focused on a limited set of use cases.
Therefore, reuse of code on a different platform or within another application context is costly or even impossible. To solve the problems arising from the peculiarities of low-level device drivers and to ease and support the work of application developers we introduced an object-oriented device layer. This layer serves as a programming abstraction for common robotic devices (e.g. laser scanners or cameras). We also introduce a set of guidelines on how to develop suitable abstractions for various robotic devices. Within the BRICS project we have applied these guidelines during the development of several software abstractions for the following robotic hardware:

• youBot

• ...

A well-proven, standard approach in systems and software engineering to deal with very

complex systems is to structure them into layers of abstract machines. Within the scope of the BRICS project we have applied this paradigm to the domain of robotics. This led to the layered BRICS architecture shown in Figure 6.1. For the purpose of these guidelines, the bottom three abstraction layers are briefly explained here:

• Hardware Elements: On this level, we have actual hardware components such as sensors and actuators.

• Hardware Device Interfaces: The vendor-supplied software (aka device drivers) to use a hardware component is on this level. Here we face, amongst other issues, heterogeneity of hardware interfaces.

• Object-Oriented Device Layer: This layer provides object-oriented wrapper classes for vendor-specific interfaces. Interface design follows well-defined guidelines and harmoniza- tion of interfaces is fostered through abstraction hierarchies.

[Figure: the BRICS system abstraction layers, from top to bottom — Robot Application Layer; Composite Component Layer; Base Component Layer; Network-Transparent Services Layer; Refactored Object-Oriented Algorithms and Legacy Algorithms; Object-Oriented Device Layer; Hardware Device Interfaces; Hardware Elements.]

Figure 6.1: The different system abstraction layers used in BRICS.

The object-oriented device layer serves the role of a mediator1 and facade2 between lower and higher levels of the BRICS architecture. More precisely, the object-oriented device layer shall have the following quality attributes:

1 http://en.wikipedia.org/wiki/Mediator_pattern
2 http://en.wikipedia.org/wiki/Facade_pattern


• Reusability. The layer shall be usable and reusable within different application contexts. For instance, a laser scanner abstraction shall be reusable for a mapping task as well as for a navigation or tracking task. Moreover, reuse on different robots, platforms, and within different robotic software frameworks shall be possible. In short, the layer must be independently usable (aka stand-alone).

• Extensibility. As novel hardware devices will appear in the future, it must be possible to extend the layer with abstractions for new devices.

• Usability. From an application developer perspective, the layer shall constrain the developer in such a way that only the right things are doable. Examples are the avoidance of raw pointers in C++, the consistent checking of method parameters (e.g. for NULL pointers), and proper error handling.

• Portability. The limits of portability are given by the hardware elements themselves. Some device drivers are simply not available for certain platforms (e.g. operating systems, programming languages, or computer architectures). However, partial portability shall be supported. For instance, an object-oriented device layer programmed in C++ under Linux shall also run on other Linux (or Unix-like) distributions.

To achieve these goals, a set of guidelines for designing this object-oriented device layer has been developed. The guidelines are recommendations for the design and development of software abstractions in the form of classes, interfaces, and interface or class hierarchies. The guidelines suggested are based on the object-oriented programming paradigm, which is not really common for vendor-specific device interfaces or drivers. The design of these vendor-specific device interfaces is often performed by developers who are not computer scientists. Although they have some training in programming, mostly in languages like Assembly or C, such programming courses often focus on teaching the technicalities of the language and the skills of using it in practice, but spend little if any time on good interface design. Roboticists faced with the task of integrating a wide variety of sensing and actuating hardware devices into a working robot system are those who eventually encounter the outcomes of such interface “designs”: the heterogeneity in the range of hardware devices reappears in an analogous heterogeneity of their software interfaces. Nowadays, however, the programmers implementing core software functionalities and integrating them into a complete robot control architecture are usually well-trained in object-oriented design and programming. They consider most vendor-supplied device interfaces as fairly low-level, cumbersome, and hard to use. The purpose of the OODL is to remedy this situation. It encapsulates the peculiarities of low-level interfaces and provides cleanly designed object-oriented interfaces to devices. The general principles of object-oriented design (OOD) and object-oriented programming (OOP) allow for well-structured programming, but do not automatically guarantee reusability and maintainability. A decent object-oriented design based on design principles and patterns is essential.
Object-oriented programming has been around for a long time and has a very large community, which includes people with lots of OOP experience. Not surprisingly, these people have given substantial thought to how good object-oriented programs should be designed. Based on the experience gained in many large OOP software projects, they have come up with a small set of object-oriented design guidelines, like the SOLID3 principles, and design patterns. Please refer to the book by Robert C. Martin [?] for detailed explanations. When the OODL layer has an object-oriented design, it is only natural to use an object-oriented programming language for its implementation. C++ and Java are the two most commonly used OOP languages in practice today. Programmers with a good background and understanding of abstraction concepts like classes and interfaces, and with experience in applying these concepts in practice, are in abundance. Furthermore, we recommend using C++ as the primary OOP language for the implementation of the OODL layer: so far, it is far simpler to interface to C and/or assembler code from C++ than from Java. Until recently, Java required going through JNI4, which is quite cumbersome to use in itself. However, as the Java language seems to have a much more active community and still sees more active development, this situation may change in the future. Some hopes in this direction are justified, e.g. by the decision of Google Inc. to provide a complete ROS implementation in Java.5 Despite our recommendation for C++, the guidelines presented here do not depend on this choice and apply to OODLs designed and implemented in any OOP language.

3 http://en.wikipedia.org/wiki/SOLID_(object-oriented_design)

Define domain-specific data types. Data type definitions encapsulate domain knowledge. For instance, in robotics, data type definitions encode knowledge about a particular device. We recommend defining data types carefully, because data types are a major medium of communication between different stakeholders, for example between hardware and device driver developers and system integrators. Moreover, we recommend defining data types independently of interfaces in order to enable the reuse of data types for different purposes (e.g. logging or visualization).

• Make careful use of templates. Programming with templates (or generics in Java) is a powerful technique to operate with generic types. Templates are particularly useful in situations which require several similar data types. They foster reuse, because there is no need to rewrite classes.

• Make units explicit. If a data type represents a measurable quantity, which is often the case for sensor or actuator data, make clear which unit it represents. Mixing up units leads to serious and hard-to-find errors. A famous example is the Mars Climate Orbiter6 launched by NASA in 1998, which was maneuvered too close to the Martian surface and burned up because the metric newton and imperial pound-force units had been mixed up. Ideally, units should be checked directly by the compiler (as in Boost.Units7).

• Make ranges explicit. Active checking of ranges for certain data sets is important independent of the domain, e.g. checking the length of postal codes or the format of a URL string. When it comes to hardware interaction it is even more important to check the ranges of data, since range violations can actually lead to hardware damage. Imagine a robot manipulator whose base joint can move from 0 to 340 degrees being sent a value of −100. Such input values can cause unpredictable behavior or, in the worst case, break the joint.

• Be aware of reference frames. Data provided by robotics hardware can usually be interpreted correctly only in relation to a specific reference frame. The reference frame cannot be defined in isolation and requires knowledge about the physical structure of the overall system, which defines the relationships between the coordinate frames involved. Such information cannot be handled locally (i.e. in the OODL), but the local reference frame (for instance, the origin of a laser beam) should be documented.

• Make temporal properties explicit. In order to combine or fuse different data provided by robotic hardware, such as range information from laser scanners and

4 Java Native Interface (JNI): http://en.wikipedia.org/wiki/Java_Native_Interface
5 ROS Java: http://code.google.com/p/rosjava/
6 Mars Climate Orbiter: http://en.wikipedia.org/wiki/Mars_Climate_Orbiter
7 Boost Units: http://www.boost.org/doc/libs/1_47_0/doc/html/boost_units.html


odometry (e.g., for mapping purposes), temporal information is needed about when exactly the data has been measured. A well-known approach is to use time stamps in the data types.

Define small, coherent sets of methods for an interface. An interface should be responsible for one purpose (e.g. an interface just for the configuration of a device). Such a design usually leads to a small set of methods, fostering decoupling.

• Avoid long parameter lists. Method signatures with long parameter lists are hard to read and difficult to maintain. We recommend encapsulating the parameters in dedicated parameter classes. This enables you to extend the set of parameters without changing the method signature itself.

• Make use of configuration classes and objects. Some devices used in robotics, such as cameras, often offer a huge set of configuration options, which may range from saturation and white balance to resolution and shutter time. This abundance of parameters induces an enormous configuration space. We recommend developing dedicated and explicit configuration classes such that the configuration options are accessible and maintainable for the developer. However, in many situations the same or similar configuration options are applicable to several use cases; the camera configuration options for a visual servoing application might also be suitable for a people detection task. Therefore, we recommend providing a set of ready-to-use configuration objects in which the majority of options are already fixed.

Abstract through interfaces, not classes. In object-oriented programming, hierarchies are used to represent layers of abstraction. Often layers are represented as classes with inheritance relations. We recommend using interfaces (aka abstract classes in C++) rather than concrete classes. This makes implementations more flexible, because interfaces and implementations become more loosely coupled. Furthermore, interfaces foster exchangeability (e.g. replacement of the implementation of an interface). This recommendation is based on the Open-Closed Principle (OCP) from object-oriented design. In robotics, where device interfaces differ widely but are used for similar purposes, this guideline is particularly valuable. For instance, laser range finders provide range information in the form of Cartesian coordinates or distances through different hardware interfaces, such as USB, CAN, or RS232. Although a single, coherent software interface can be provided for all these devices, their implementations may differ widely.

Design a consistent error handling concept. Dealing with robotics hardware often involves having to cope with legacy code like device drivers. There is, however, no commonly used standard on how to deal with errors. Each piece of legacy code follows a different concept, like exceptions, error codes, or specific error classes. For the sake of harmonization, we recommend establishing a common error handling concept.

• Make errors explicit. Make it explicit if a method can produce errors. For instance, in contrast to Java, C++ does not require indicating whether a method might raise an exception or not. This leads to situations where developers forget to catch exceptions.

• Handle errors locally whenever possible. Do not defer errors to someone else if you can handle them within your own scope (e.g. class or method).

Do not make implicit assumptions. Motivation: information is more accessible and can be checked by the compiler when it is modeled in the code (e.g. units). Unfortunately, not all information can be easily encoded in C++.


• Avoid platform assumptions. Hardware and software assumptions, such as the amount and mapping of memory, the threading model, scheduling, and the type of computer architecture (e.g. 32 versus 64 bit), should not be encoded in the device layer. For instance, system calls such as sleep(), wait(), fork(), and ioctl() behave differently on different operating systems and platforms and should therefore be avoided. We recommend using the abstraction facilities provided by third-party libraries such as Boost8, pocolibs9, or ACE10.

• Carefully select and document software dependencies. As mentioned before, the object-oriented device layer shall be usable in a stand-alone manner. At best, it only depends on the programming language and the hardware driver. However, very often third-party libraries are used to achieve certain goals (e.g. unit representation with Boost.Units11). The decision whether or not to use a third-party library is a tightrope walk. We recommend carefully balancing the advantages (e.g. faster development) and disadvantages (e.g. reduced portability) of using third-party libraries.

• Avoid stateful interfaces. Robotic device drivers are very often stateful. For example, in order to receive data from a camera, the programmer first needs to connect to the camera (software-wise, not physically). Such stateful behavior can be hidden behind a stateless interface, e.g. by automatically connecting to the camera whenever data is required. Stateless interfaces increase the ease of use. However, sometimes the application developer may need to know the device state (e.g. for the calibration of a manipulator); in such cases it is not recommended to hide the states.

• Avoid usage assumptions. Software developers are usually aware of the pitfalls of their own software; in short, they know how to use it in such a manner that it does not break. We recommend assuming that the actual user might misuse the software. Hence, means against misuse should be implemented. For instance, wrong parameter passing (e.g. wrong types, NULL pointers, or infeasible values) can easily be checked.

• Avoid timing assumptions. A method should be callable at any time. There should be no timing constraints on when or when not to call a method. For example, a laser scanner which is able to provide data at a frequency of 5 Hz should not break when the application invokes getData() at 10 Hz.

• If you make assumptions, make them explicit. When we work with real hardware we have to consider specific properties which cannot be encoded in software. For instance, a robot manipulator with relative encoders (e.g. the KUKA youBot manipulator or the Neuronics Katana) must be moved to be initialized, which requires free space around the manipulator. Such assumptions need to be documented.

Use coding conventions. A common practice in professional software development is to establish common coding conventions within a project team. We strongly recommend adopting, possibly with a few modifications, one of the readily available coding conventions. Good examples are the C++ coding conventions developed by Google12 or the Java coding conventions developed by Sun Microsystems13. It is not so important which coding conventions are adopted; the actual decision should be taken by the development team itself, not by a single project manager. But it is very important that everyone in a project

8 http://www.boost.org/
9 http://www.appinf.com/en/products/pocolibs.html
10 http://www.cs.wustl.edu/~schmidt/ACE.html
11 http://www.boost.org/doc/html/boost_units.html
12 http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml
13 http://www.oracle.com/technetwork/java/codeconv-138413.html


team applies them rigorously and consistently. Naming of methods, members, classes, and interfaces should follow a consistent set of rules which fosters readability. Choosing names well is an important part of code documentation.


Bibliography

[1] ABB Flexible Automation, RAPID Reference Manual, baseware os 3.1 rev.1 edition.

[2] Fanuc Robotics Inc., KAREL Reference Manual, 1999.

[3] KUKA Roboter GmbH, KR C2/KR C3 Expert Programming. KUKA System Software (KSS), release 5.2 edition, 2003.

[4] Player 2.0 interface specification reference. [online] http://playerstage.sourceforge.net/doc/Player-2.0.0/player/group__libplayercore.html, 2005.

[5] ROS developer documentation. [online] http://www.ros.org/wiki/ROS/Concepts, January 2010.

[6] ROS roscpp client library api reference. [online] http://www.ros.org/wiki/roscpp/Overview/Initialization and Shutdown, January 2010.

[7] R. Alami, R. Chatila, S. Fleury, M. Ghallab, and F. Ingrand. An architecture for autonomy. International Journal of Robotics Research, 17(4), April 1998.

[8] R. Alami, R. Chatila, S. Fleury, M. Ghallab, and F. Ingrand. An architecture for autonomy. International Journal of Robotics Research, 17(4), April 1998.

[9] R. Alami, R. Chatila, F. Ingrand, and F. Py. Dependability issues in a robot control archi- tecture. In 2nd IARP/IEEE-RAS Joint Workshop on Technical Challenge for Dependable Robots in Human Environments. LAAS-CNRS, October 7- 8 2002.

[10] I. Alexander and L. Beus-Dukic. Discovering Requirements: How to Specify Products and Services. Wiley, April 2009.

[11] Y. Amir, C. Danilov, M. Miskin-Amir, J. Schultz, and J. Stanton. The spread toolkit: Architecture and performance. Technical report, Johns Hopkins University, 2004.

[12] N. Ando, S. Kurihara, G. Biggs, T. Sakamoto, H. Nakamoto, and T. Kotoku. Software deployment infrastructure for component based rt-systems. Journal of Robotics and Mecha- tronics, 23(3):350–359, 2011.

[13] N. Ando, T. Suehiro, K. Kitagaki, T. Kotoku, and W. keun Yoon. RT-Middleware: Dis- tributed component middleware for RT (robot technology). In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS2005), pages 3555–3560, Edmonton, Canada, 2005.

[14] N. Ando, T. Suehiro, K. Kitagaki, T. Kotoku, and W.-K. Yoon. Rt-component object model in rt-middleware- distributed component middleware for rt (robot technology). In IEEE International Symposium on Computational Intelligence in Robotics and Automation, June 2005.


[15] P. André, G. Ardourel, and C. Attiogbé. Defining component protocols with service composition: illustration with the kmelia model. In Proceedings of the 6th international conference on Software composition, SC'07, pages 2–17, Berlin, Heidelberg, 2007. Springer-Verlag.

[16] V. Andronache and M. Scheutz. ADE - an architecture development environment for virtual and robotic agents. In AT2AI-4: Proceedings of the Fourth International Symposium "From Agent Theory to Agent Implementation", 2004.

[17] V. Andronache and M. Scheutz. Integrating theory and practice: The agent architecture framework APOC and its development environment ADE. In Proceedings of AAMAS. ACM Press, 2004.

[18] G. Angelis, J. Elfring, R. Janssen, and R. van de Molengraft. Room layout and object definition. Technical report, Technical University of Eindhoven, March 2010.

[19] F. Arbab. Reo: a channel-based coordination model for component composition. Mathematical Structures in Comp. Sci., 14:329–366, June 2004.

[20] F. Armour and G. Miller. Advanced Use Case Modeling: Software Systems. Addison-Wesley Professional, January 2001.

[21] C. Atkinson, C. Bunse, H.-G. Gross, and C. Peper, editors. Component-based Software Development for Embedded Systems: An Overview of Current Research Trends, volume 3778 of Lecture Notes in Computer Science. Springer, December 2005.

[22] L. Bass, P. Clements, and R. Kazman. Software Architecture in Practice. Addison-Wesley Professional, SEI Series in Software Engineering, 2nd edition, 2003.

[23] D. S. Batory and B. J. Geraci. Validating component compositions in software system generators. Technical report, University of Texas at Austin, Austin, TX, USA, 1995.

[24] L. L. Beck and T. E. Perkins. A survey of software engineering practice: Tools, methods, and results. IEEE Trans. Softw. Eng., 9:541–561, September 1983.

[25] B. Berenbach, D. Paulish, J. Kazmeier, and A. Rudorfer. Software and Systems Requirements Engineering: In Practice. McGraw-Hill Osborne Media, March 2009.

[26] R. Beuran, J. Nakata, T. Okada, Y. Tan, and Y. Shinoda. Real-time emulation of networked robot systems. In Proceedings of the 1st international conference on Simulation tools and techniques for communications, networks and systems & workshops, Simutools '08, pages 56:1–56:10, Brussels, Belgium, 2008. ICST (Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering).

[27] S. Beydeda, M. Book, and V. Gruhn. Model-Driven Software Development. Springer, 2010.

[28] G. Biggs, N. Ando, and T. Kotoku. Native robot software framework inter-operation. In Proceedings of the Second International Conference on Simulation, Modeling, and Programming for Autonomous Robots (SIMPAR), 2010.

[29] R. Black. Managing the Testing Process: Practical Tools and Techniques for Managing Hardware and Software Testing. Wiley, August 2009.

[30] B. S. Blanchard. Systems Engineering Management. Wiley, July 2008.

[31] P. Bonasso, J. Firby, E. Gat, D. Kortenkamp, D. Miller, and M. Slack. Experiences with an architecture for intelligent, reactive agents. Journal of Experimental and Theoretical Artificial Intelligence, 9(1), 1997.


[32] B. Boone, T. Verdickt, B. Dhoedt, and F. De Turck. Design time deployment optimization for component based systems. In Proceedings of the 25th conference on IASTED International Multi-Conference: Software Engineering, pages 242–248, Anaheim, CA, USA, 2007. ACTA Press.

[33] L. C. Briand, Y. Labiche, and M. M. Sówka. Automated, contract-based user testing of commercial-off-the-shelf components. In Proceedings of the 28th international conference on Software engineering, ICSE '06, pages 92–101, New York, NY, USA, 2006. ACM.

[34] A. Brooks, T. Kaupp, and A. Makarenko. Orca: Components for robotics. In IEEE/RSJ International Conference on Intelligent Robots and Systems, Workshop on Robotic Standardization, 2006.

[35] A. Brooks, T. Kaupp, and A. Makarenko. Building a software architecture for a human-robot team using the orca framework. In IEEE International Conference on Robotics and Automation, 2007.

[36] A. Brooks, T. Kaupp, A. Makarenko, S. Williams, and A. Oreback. Orca: a component model and repository. Software Engineering for Experimental Robotics. Springer Tracts in Advanced Robotics, 30:231–251, 2007.

[37] A. Brooks, T. Kaupp, A. Makarenko, S. Williams, and A. Orebaeck. Orca overview. [online] http://orca-robotics.sourceforge.net/overview.html, 2004.

[38] A. Brooks, T. Kaupp, A. Makarenko, S. Williams, and A. Orebaeck. Towards component-based robotics. In Proc. of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2005.

[39] A. Brooks, T. Kaupp, A. Makarenko, S. Williams, and A. Orebaeck. Towards component-based robotics. In Proc. of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2005.

[40] D. Brugali, L. Gherardi, A. Luzzana, and A. Zakharov. A reuse-oriented development process for component-based robotic systems. In 3rd International Conference on Simulation, Modeling and Programming for Autonomous Robots (SIMPAR 2012), Tsukuba, Japan, November 5-8 2012.

[41] D. Brugali and P. Scandurra. Component-based robotic engineering (part i) [tutorial]. IEEE Robotics & Automation Magazine, 16(4):84–96, December 2009.

[42] D. Brugali and A. Shakhimardanov. Component-based robotic engineering (part ii). IEEE Robot. Automat. Mag., 17(1):100–112, 2010.

[43] H. Bruyninckx. Open robot control software: the OROCOS project. In IEEE International Conference on Robotics & Automation, Seoul, Korea, May 21-26 2001.

[44] H. Bruyninckx, P. Soetens, and B. Koninckx. The real-time motion control core of the Orocos project. In Proceedings of the International Conference on Robotics and Automation (ICRA), 2003.

[45] H. R. Burris. Instrumented architectural level emulation technology. In Proceedings of the June 13-16, 1977, national computer conference, AFIPS '77, pages 937–946, New York, NY, USA, 1977. ACM.

[46] F. Buschmann, K. Henney, and D. C. Schmidt. Pattern-Oriented Software Architecture, Volume 4: A Pattern Language for Distributed Computing. Wiley, 2007.


[47] F. Buschmann, R. Meunier, H. Rohnert, P. Sommerlad, and M. Stal. Pattern-Oriented Software Architecture, Volume 1: A System of Patterns. Wiley, 1996.

[48] X. Chen, A. Jacoff, P. Lima, and D. Nardi. Benchmarking intelligent (multi-)robot systems. http://labrococo.dis.uniroma1.it/bimrs/, August 2010.

[49] B.-W. Choi and E.-C. Shin. An architecture for OPRoS based software components and its applications. In IEEE International Symposium on Assembly and Manufacturing, 2009.

[50] P. Clements and L. Northrop. Software Product Lines: Practices and Patterns. Addison- Wesley, 2002.

[51] P. C. Clements and L. M. Northrop. Software architecture: An executive overview. Tech- nical report, Technical Report CMU/SEI-96-TR-003, 1996.

[52] Y. Coady, G. Kiczales, J. S. Ong, A. Warfield, and M. Feeley. Brittle systems will break - not bend: can aspect-oriented programming help? In Proceedings of the 10th workshop on ACM SIGOPS European workshop, EW 10, pages 79–86, New York, NY, USA, 2002. ACM.

[53] A. Cockburn. Writing Effective Use Cases. Addison-Wesley Professional, October 2000.

[54] A. Cockburn. Agile Software Development: The Cooperative Game. Addison Wesley, 2nd edition edition, October 2006.

[55] M. Cohn. User Stories Applied: For Agile Software Development. Addison Wesley Profes- sional, series: The SEI series in Software Engineering, March 2004.

[56] M. Cohn. Succeeding with Agile Software Development Using Scrum. Addison Wesley, November 2009.

[57] I. Crnkovic and M. Larsson, editors. Building Reliable Component-Based Software Systems. Artech House Publishers, 1 edition, July 2002.

[58] T. De Laet, S. Bellens, H. Bruyninckx, and J. De Schutter. Geometric relations between rigid bodies: From semantics to software. IEEE Robotics & Automation Magazine, 2012.

[59] S. Dhouib, S. Kchir, S. Stinckwich, T. Ziadi, and M. Ziane. Robotml, a domain-specific lan- guage to design, simulate and deploy robotic applications. In I. Noda, N. Ando, D. Brugali, and J. J. Kuffner, editors, SIMPAR, volume 7628 of Lecture Notes in Computer Science, pages 149–160. Springer, 2012.

[60] R. Diankov and J. Kuffner. OpenRAVE: A Planning Architecture for Autonomous Robotics. Technical Report CMU-RI-TR-08-34, Robotics Institute, Carnegie Mellon Uni- versity, Pittsburgh, PA, July 2008.

[61] B. Dittes and C. Goerick. Intelligent system architectures - comparison by translation. In Intelligent Robots and Systems (IROS), 2011 IEEE/RSJ International Conference on, pages 1015 –1021, sept. 2011.

[62] F. Doucet, S. Shukla, and R. Gupta. Structural component composition for system-level models, pages 57–81. Kluwer Academic Publishers, Norwell, MA, USA, 2004.

[63] B. P. Douglass. Design Patterns for Embedded Systems in C: An Embedded Software Engineering Toolkit. Newnes, Newton, MA, USA, 1st edition, 2010.


[64] R. Featherstone. Rigid body dynamics algorithms, volume 49. Springer Berlin:, 2008.

[65] P. H. Feiler, D. P. Gluch, and J. J. Hudak. The architecture analysis & design language (AADL): An introduction. Technical Report CMU/SEI-2006-TN-011, Software Engineer- ing Institute, Carnegie Mellon University, 2006.

[66] M. Felis. RBDL: Rigid Body Dynamics Library, 2011.

[67] Y. Feng, G. Huang, Y. Zhu, and H. Mei. Exception handling in component composition with the support of middleware. In Proceedings of the 5th international workshop on Software engineering and middleware, SEM ’05, pages 90–97, New York, NY, USA, 2005. ACM.

[68] J. L. Fernandez, R. Simmons, R. Sanz, and A. R. Diguez. A robust stochastic supervision architecture for an indoor mobile robot. In In Proceedings of the International Conference on Field and Service Robotics (FSR), June 2001.

[69] S. Fleury and M. Herrb. Genom user’s guide. Technical report, LAAS CNRS, RIA Group, 2003.

[70] S. Fleury, M. Herrb, and R. Chatila. GenoM: A tool for the specification and the imple- mentation of operating modules in a distributed robot architecture. In In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 842–848, September 1997.

[71] S. Fleury, M. Herrb, and R. Chatila. GenoM: A tool for the specification and the im- plementation of operating modules in a distributed robot architecture. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 842–848, September 1997.

[72] S. Fleury, M. Herrb, and R. Chatila. Genom: A tool for the specification and the imple- mentation of operating modules in a distributed robot architecture. In In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 842–848, September 1997.

[73] A. Flores and M. Polo. Testing-based process for evaluating component replaceability. Electron. Notes Theor. Comput. Sci., 236:101–115, April 2009.

[74] E. Gamma, R. Helm, R. E. Johnson, and J. Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, 1995.

[75] Willow Garage. ROS developer documentation. [online] http://www.ros.org/wiki/ROS/Concepts, January 2010.

[76] E. Gat. On three-layer architectures. In Artificial Intelligence and Mobile Robots. MIT/AAAI Press, 1997.

[77] B. Gerkey and et al. Player 2.0 Interface Specification reference. Player/ Stage / Gazebo Project, 2005.

[78] B. Gerkey, R. Vaughan, K. Stoey, A. Howard, G. Sukhtame, and M. Mataric. Most valuable Player: A robot device server for distributed control. In In Proc. of the IEEE/RSJ Inter- national Conference on Intelligent Robots and Systems (IROS), pages 1226–1231, 2001.

[79] L. Gherardi. Variability Modeling and Resolution in Component-based Robotics Systems. PhD thesis, Università degli Studi di Bergamo, 2013.


[80] L. Gherardi and D. Brugali. An eclipse-based feature models toolchain. In 6th Italian Workshop on Eclipse Technologies (EclipseIT 2011), Milano, Italy, September 22-23 2011.

[81] N. S. Gill and P. Tomar. Modified development process of component-based software engineering. SIGSOFT Softw. Eng. Notes, 35:1–6, March 2010.

[82] J. O. Grady. System Engineering Deployment. CRC Press, July 1999.

[83] A. Green, H. Hüttenrauch, and E. Topp. Measuring up as an intelligent robot – on the use of high-fidelity simulations for human-robot interaction research. In Proc. of perMIS’06: Performance Metrics for Intelligent Systems, Gaithersburg, MD, USA, August 2006. NIST.

[84] C. Grelle, L. Ippolito, V. Loia, and P. Siano. Agent-based architecture for designing hybrid control systems. Inf. Sci., 176:1103–1130, May 2006.

[85] V. Handziski, J. Polastre, J.-H. Hauer, C. Sharp, A. Wolisz, D. Culler, and D. Gay. Hardware abstraction architecture. TinyOS.

[86] S. V. Hashemian and F. Mavaddat. A logical reasoning approach to automatic composition of stateless components. Fundam. Inf., 89:539–577, December 2008.

[87] N. Hawes, M. Brenner, and K. Sjöö. Planning as an architectural control mechanism. In HRI ’09: Proceedings of the 4th ACM/IEEE international conference on Human robot interaction, pages 229–230, New York, NY, USA, 2009. ACM.

[88] N. Hawes and M. Hanheide. CAST: Middleware for memory-based architectures. In Proceedings of the AAAI Robotics Workshop: Enabling Intelligence Through Middleware, July 2010.

[89] N. Hawes and J. Wyatt. Engineering intelligent information-processing systems with CAST. Adv. Eng. Inform., 24(1):27–39, 2010.

[90] F. Hegger, N. Hochgeschwender, G. K. Kraetzschmar, and P. Plöger. People detection in 3D point clouds using local surface normals. In Proceedings of the 16th Annual RoboCup International Symposium, 2012. To appear.

[91] G. T. Heineman and W. T. Councill. Component-Based Software Engineering: Putting the Pieces Together. Addison-Wesley Professional, June 2001.

[92] T. Henne, A. Juarez, M. Reggiani, and E. Prassler. Towards autonomous design of exper- iments for robots. In Proc. of CogRob Workshop, ECAI-2008, 2008.

[93] H. G. Liddell and R. Scott. A Greek-English Lexicon. [online] http://www.perseus.tufts.edu/hopper/.

[94] P. Hintjens. ZeroMQ: Messaging for Many Applications. O’Reilly Media, 2013.

[95] H. Hirukawa, F. Kanehiro, and S. Kajita. OpenHRP: Open Architecture Humanoid Robotics Platform, volume 6/2003 of Springer Tracts in Advanced Robotics. Springer Berlin / Heidelberg, 2003.

[96] G. Hirzinger, B. Brunner, K. Landzettel, N. Sporer, J. Butterfaß, and M. Schedl. Space robotics — DLR’s telerobotic concepts, lightweight arms and articulated hands. Auton. Robots, 14:127–145, March 2003.

[97] D. K. Hitchins. Systems Engineering: A 21st Century Systems Methodology. Wiley, January 2008.

[98] B. Horling, V. Lesser, R. Vincent, and T. Wagner. The soft real-time agent control architecture. Autonomous Agents and Multi-Agent Systems, 12:35–91, January 2006.

[99] M. Johnstone, D. Creighton, and S. Nahavandi. Enabling industrial scale simulation/emulation models. In Proceedings of the 39th Conference on Winter Simulation: 40 years! The best is yet to come, WSC '07, pages 1028–1034, Piscataway, NJ, USA, 2007. IEEE Press.

[100] K. Kang, S. Cohen, J. Hess, W. Novak, and A. Peterson. Feature-oriented domain analysis (FODA) feasibility study. Technical Report CMU/SEI-90-TR-21, Software Engineering Institute, Carnegie Mellon University, 1990.

[101] K. Kanoun, H. Madeira, and J. Arlat. A framework for dependability benchmarking. CiteseerX, 2002.

[102] P. Kaur and H. Singh. Version management and composition of software components in different phases of software development life cycle. SIGSOFT Softw. Eng. Notes, 34:1–9, July 2009.

[103] I.-G. Kim, D.-H. Bae, and J.-E. Hong. A component composition model providing dynamic, flexible, and hierarchical composition of components for supporting software evolution. J. Syst. Softw., 80:1797–1816, November 2007.

[104] A. Kleppe, J. Warmer, and W. Bast. MDA Explained: The Model Driven Architecture: Practice and Promise. Addison-Wesley, May 2003.

[105] S. Knoop, S. Vacek, R. Zollner, C. Au, and R. Dillmann. A CORBA-based distributed software architecture for control of service robots. In Intelligent Robots and Systems, 2004 (IROS 2004), Proceedings of the 2004 IEEE/RSJ International Conference on, volume 4, pages 3656–3661, 2004.

[106] D. S. Kolovos, R. F. Paige, T. Kelly, and F. A. C. Polack. Requirements for domain-specific languages. In Proceedings of the 1st ECOOP Workshop on Domain-Specific Program Development (DSPD 2006), 2006.

[107] D. Kortenkamp, D. Schreckenghost, and P. Bonasso. Three NASA application domains for integrated planning, scheduling and execution. In AIPS Workshop on Integrating Planning, Scheduling and Execution in Dynamic and Uncertain Environments, June 1998.

[108] A. Kossiakoff and W. N. Sweet. Systems Engineering Principles and Practice. Wiley-Interscience, December 2002.

[109] J. Kramer. Advanced Message Queuing Protocol (AMQP). Linux Journal, 2009(187):3, 2009.

[110] KUKA Robotics. Lightweight robot (LWR). KUKA corporate web site, http://www.kuka-robotics.com/germany/en/products/addons/lwr/.

[111] KUKA Robotics. Colleague omnirob is on the road. KUKA corporate web site, http://www.kuka-robotics.com/germany/en/pressevents/news/NN_100615_omniRob.htm, June 2010.

[112] J. Leon, D. Kortenkamp, and D. Schreckenghost. A planning, scheduling and control architecture for advanced life support systems. In NASA Workshop on Planning and Scheduling for Space. NASA, October 1997.

[113] W. C. Lim. Managing Software Reuse. Prentice Hall, July 2004.

[114] B. Lussier, R. Chatila, F. Ingrand, M. Killijian, and D. Powell. On fault tolerance and robustness in autonomous systems. In 3rd IARP - IEEE/RAS - EURON Joint Workshop on Technical Challenges for Dependable Robots in Human Environments, (Manchester, UK), 7-9 September 2004.

[115] R. Madhavan, E. Tunstel, and E. Messina. Performance Evaluation and Benchmarking of Intelligent Systems. Springer Publishing Company, Incorporated, 1st edition, 2009.

[116] M. W. Maier. The Art of Systems Architecting. CRC Press, January 2009.

[117] R. C. Martin. Agile Software Development: Principles, Patterns, and Practices. Prentice Hall, October 2002.

[118] N. R. Mehta, N. Medvidovic, and S. Phadke. Towards a taxonomy of software connectors. In ICSE ’00: Proceedings of the 22nd international conference on Software engineering, pages 178–187. ACM, 2000.

[119] G. Melnik and G. Meszaros. Acceptance Test Engineering Guide: Thinking about Acceptance. Microsoft Corp., release candidate version edition, October 2009.

[120] D. Milicev. Model-Driven Development with Executable UML. Wrox, July 2009.

[121] J. C. Mogul. Brittle metrics in operating systems research. In Proceedings of the Seventh Workshop on Hot Topics in Operating Systems, HOTOS '99, pages 90–, Washington, DC, USA, 1999. IEEE Computer Society.

[122] nicoh - GitHub. https://github.com/nicoh, 2012.

[123] Y. Endo, P. Ulam, A. Wagner, and R. C. Arkin. Integrated mission specification and task allocation for robot teams - part 2: Testing and evaluation. Technical report, Georgia Institute of Technology, 2006.

[124] R. Passama and D. Andreu. Towards a Language for Understanding Architectural Choices in Robotics. In ICRA’07: Workshop Software Development and Integration in Robotics "Understanding Robot Software Architectures", Apr. 2007.

[125] O. Pastor and J. C. Molina. Model-Driven Architecture in Practice: A Software Production Environment Based on Conceptual Modelling. Springer, 2010.

[126] S. Petters, D. Thomas, M. Friedmann, and O. von Stryk. Multilevel Testing of Control Software for Teams of Autonomous Mobile Robots. In SIMPAR ’08: Proceedings of the 1st International Conference on Simulation, Modeling, and Programming for Autonomous Robots, volume 5325/2008, pages 183–194. Springer, 2008.

[127] S. L. Pfleeger and J. M. Atlee. Software Engineering: Theory and Practice. Prentice Hall, 4th edition, February 2009.

[128] R. Philippsen. ROS roscpp client library API reference. Willow Garage, Inc., Menlo Park, CA, USA, January 2010.

[129] K. Pohl. Requirements Engineering: Fundamentals, Principles, and Techniques. Springer, July 2010.

[130] J. L. Posadas, J. L. Poza, J. E. Simó, G. Benet, and F. Blanes. Agent-based distributed architecture for mobile robot control. Eng. Appl. Artif. Intell., 21:805–823, September 2008.

[131] E. Prassler and K. Nilsson. 1,001 robot architectures for 1,001 robots [industrial activities]. IEEE Robotics & Automation Magazine, 16(1):113, March 2009.

[132] R. Pressman. Software Engineering: A Practitioner's Approach. McGraw-Hill, 7th edition, January 2009.

[133] M. Proetzsch, T. Luksch, and K. Berns. Development of complex robotic systems using the behavior-based control architecture iB2C. Robot. Auton. Syst., 58:46–67, January 2010.

[134] M. Quigley, K. Conley, B. P. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, and A. Y. Ng. ROS: an open-source robot operating system. In Proceedings of the Workshop on Open Source Software held at the International Conference on Robotics and Automation (ICRA), 2009.

[135] M. Quigley, B. Gerkey, K. Conley, J. Faust, T. Foote, J. Leibs, E. Berger, R. Wheeler, and A. Ng. ROS: an open-source robot operating system. In ICRA, 2009.

[136] M. R. J. Qureshi and S. A. Hussain. A reusable software component-based development process model. Adv. Eng. Softw., 39:88–94, February 2008.

[137] R. N. Taylor, N. Medvidovic, and E. M. Dashofy. Software Architecture: Foundations, Theory, and Practice. Wiley, January 2009.

[138] D. Rosenberg and M. Stephens. Use Case Driven Object Modeling with UML: Theory and Practice. Apress, January 2007.

[139] E. Santana de Almeida, A. Alvaro, D. Lucredio, A. Francisco do Prado, and L. C. Trevelin. Distributed component-based software development: An incremental approach. In Proceedings of the 28th Annual International Computer Software and Applications Conference - Volume 01, COMPSAC '04, pages 4–9, Washington, DC, USA, 2004. IEEE Computer Society.

[140] U. Scheben. Hierarchical composition of industrial components. Sci. Comput. Program., 56:117–139, April 2005.

[141] C. Schlegel. Communication patterns as key towards component interoperability. In D. Brugali, editor, Software Engineering for Experimental Robotics, volume 30 of Springer Tracts in Advanced Robotics, pages 183–210. Springer Berlin / Heidelberg, 2007.

[142] C. Schlegel, A. Steck, D. Brugali, and A. Knoll. Design abstraction and processes in robotics: From code-driven to model-driven engineering. In Proceedings of the International Conference on Simulation, Modeling, and Programming for Autonomous Robots (SIMPAR), 2010.

[143] C. Schlegel and R. Woerz. Interfacing different layers of a multilayer architecture for sensorimotor systems using the object-oriented framework SmartSoft. In Proceedings of the Third European Workshop on Advanced Mobile Robots (EUROBOT-99). IEEE Xplore, September 1999.

[144] D. C. Schmidt. Model-driven engineering. IEEE Computer, 39(2):25–31, February 2006.

[145] D. C. Schmidt. Real-time CORBA with TAO (The ACE ORB). http://www.cs.wustl.edu/~schmidt/TAO.html, 2010.

[146] K. U. Scholl, J. Albiez, and B. Gassmann. An expandable modular control architecture.

[147] D. Schreckenghost, P. Bonasso, D. Kortenkamp, and D. Ryan. Three tier architecture for controlling space life support systems. In IEEE Symposium on Intelligence in Automation and Robotics. IEEE Press, 1998.

[148] K. Schwaber and M. Beedle. Agile Software Development with Scrum. Series in Agile Software Development. Prentice Hall, October 2001.

[149] A. Shakhimardanov, N. Hochgeschwender, and G. K. Kraetzschmar. Component models in robotics software. In Proceedings of the Workshop on Performance Metrics for Intelligent Systems, Baltimore, USA, 2010.

[150] A. Shakhimardanov, N. Hochgeschwender, and G. K. Kraetzschmar. Component models in robotics software. In Proceedings of the Workshop on Performance Metrics (PerMIS), 2010.

[151] A. Shakhimardanov and E. Prassler. Comparative evaluation of robotic software integration systems: A case study. In Intelligent Robots and Systems, 2007. IROS 2007. IEEE/RSJ International Conference on, pages 3031–3037, Oct. 2007.

[152] J. H. Sharp and S. D. Ryan. A theoretical framework of component-based software devel- opment phases. SIGMIS Database, 41:56–75, February 2010.

[153] J. Shore and Chromatic. The Art of Agile Development. O’Reilly Media, October 2007.

[154] F. R. C. Silva, E. S. Almeida, and S. R. L. Meira. An approach for component testing and its empirical validation. In Proceedings of the 2009 ACM symposium on Applied Computing, SAC ’09, pages 574–581, New York, NY, USA, 2009. ACM.

[155] R. Simmons, R. Goodwin, K. Z. Haigh, S. Koenig, and J. O’Sullivan. A layered architecture for office delivery robots. In W. L. Johnson and B. Hayes-Roth, editors, Proceedings of the First International Conference on Autonomous Agents (Agents’97), pages 245–252, New York, 1997. ACM Press.

[156] R. Smits. OROCOS KDL: Kinematics and Dynamics Library, 2010.

[157] P. Soetens. The OROCOS Component Builder’s Manual, 1.8 edition, 2007.

[158] T. Stahl, M. Voelter, and K. Czarnecki. Model-driven Software Development: Technology, Engineering, Management. Wiley, March 2006.

[159] I. Sommerville. Software Engineering. Addison Wesley, 9th edition, March 2010.

[160] I. Sommerville and P. Sawyer. Requirements Engineering: A Good Practice Guide. Wiley, April 1997.

[161] C. Szyperski. Component Software: Beyond Object-Oriented Programming. Addison-Wesley Professional, 2nd edition, 2002.

[162] The BRICS Project. Hardware building blocks. http://www.best-of-robotics.org/en/research/hardware-building-blocks.html, 2009.

[163] The OROCOS Developer Group. OROCOS: RTT Online API Documentation. KU Leuven, Leuven, Belgium, 2010.

[164] The RoboEarth Project Consortium. What is RoboEarth? http://www.roboearth.org/, 2009.

[165] The RoSta Project Consortium. RoSta – robot standards and reference architectures. http://www.robot-standards.org/, 2008.

[166] The RTM Project Group. OpenRTM-aist Documents. AIST, Japan, February 2010.

[167] Y. Tsuchiya, M. Mizukawa, T. Suehiro, N. Ando, H. Nakamoto, and A. Ikezoe. Development of light-weight RT-component (LWRTC) on embedded processor – application to crawler control subsystem in the physical agent system. In SICE-ICASE International Joint Conference, pages 2618–2622, 2006.

[168] A. Turetta, G. Casalino, and A. Sorbara. Distributed control architecture for self-reconfigurable manipulators. Int. J. Rob. Res., 27:481–504, March 2008.

[169] H. Utz, S. Sablatnög, S. Enderle, and G. Kraetzschmar. MIRO – middleware for mobile robot applications. IEEE Transactions on Robotics and Automation, volume 18. IEEE Press, August 2002.

[170] H. Utz, S. Sablatnög, S. Enderle, and G. Kraetzschmar. MIRO user manual, version 0.9.4, October 2006.

[171] H. Utz, S. Sablatnög, S. Enderle, and G. K. Kraetzschmar. MIRO – middleware for mobile robot applications. IEEE Transactions on Robotics and Automation, Special Issue on Object-Oriented Distributed Control Architectures, 18(4):493–497, August 2002.

[172] R. T. Vaughan, B. P. Gerkey, and A. Howard. On device abstractions for portable, reusable robot code. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003), 2003.

[173] V. Verma, A. Jonsson, C. Pasareanu, and M. Iatauro. Universal Executive and PLEXIL: Engine and language for robust spacecraft control and operations. In American Institute of Aeronautics and Astronautics Space Conference, 2006.

[174] R. Volpe, I. Nesnas, T. Estlin, D. Mutz, R. Petras, and H. Das. The CLARAty architecture for robotic autonomy. In Proceedings of the IEEE Aerospace Conference, March 2001.

[175] O. Šery and F. Plášil. Slicing of component behavior specification with respect to their composition. In Proceedings of the 10th international conference on Component-based soft- ware engineering, CBSE’07, pages 189–202, Berlin, Heidelberg, 2007. Springer-Verlag.

[176] A. J. A. Wang and K. Qian. Component-Oriented Programming. Wiley-Interscience, 1st edition, 2005.

[177] W. Wang and A. K. Mok. A class-based approach to the composition of real-time software components. J. Embedded Comput., 1:3–15, January 2005.

[178] C. S. Wasson. Systems Analysis, Design, and Development: Concepts, Principles, and Practices. Wiley, December 2005.

[179] S. Weerawarana, F. Curbera, et al. Web Services Platform Architecture: SOAP, WSDL, WS-Policy, WS-Addressing, WS-BPEL, WS-Reliable Messaging and More. Prentice Hall PTR, 2005.

[180] J. Wyatt and N. Hawes. Multiple workspaces as an architecture for cognition. In A. V. Samsonovich, editor, Proceedings of AAAI 2008 Fall Symposium on Biologically Inspired Cognitive Architectures, pages 201–206. The AAAI Press, 2008.

[181] R. R. Young. The Requirements Engineering Handbook. Artech House Publishers, November 2003.

[182] U. Zdun. Tailorable language for behavioral composition and configuration of software components. Comput. Lang. Syst. Struct., 32:56–82, April 2006.

[183] ZeroC. Welcome to ZeroC, the Home of ICE. http://www.zeroc.com/, 2010.
