
DEGREE PROJECT IN MECHANICAL ENGINEERING, SECOND CYCLE, 30 CREDITS
STOCKHOLM, SWEDEN 2016

Flexible Updates of Embedded Systems Using Containers

SANDRA AIDANPÄÄ

ELIN MK NORDMARK

KTH ROYAL INSTITUTE OF TECHNOLOGY SCHOOL OF INDUSTRIAL ENGINEERING AND MANAGEMENT

Master of Science Thesis MMK2016:92 MDA 565

Flexible Updates of Embedded Systems Using Containers

Sandra Aidanpää, Elin MK Nordmark

KTH Industrial Engineering and Management
Approved: 2016-06-13
Examiner: De-Jiu Chen
Supervisor: Viacheslav Izosimov
Commissioner: Tritech AB
Contact person: Mats Malmberg

Abstract

In this thesis, the operating-system-level virtualization solution Docker is investigated in the context of updating an embedded system on the application level. An updating sequence is designed, modelled and implemented, and experiments are conducted on it to measure uptime and current.

Within the popular notion of the Internet of Things, more and more things are being connected to the Internet, which creates the possibility of dynamic updates over the Internet. Being able to update remotely can be very beneficial, as systems may be costly or impractical to reach physically for updates. Operating-system-level virtualization, software containers, is a lightweight virtualization solution that can be used for dynamic updating purposes. Virtualization properties, such as resource isolation and letting software share hardware capabilities, are used in determining the architecture. The container architecture used is a microservice architecture, where systems are composed of many smaller, loosely coupled services.

The application area for the results of this thesis is start-ups in the Internet of Things field, delimited to low-complexity systems such as consumer products. The update regime is created with the properties of microservice architectures in mind, creating a self-propelling, self-testing, scalable and seamless dynamic updating process that can be used for systems of different complexity. The update regime is modeled to give proof of concept and to help design the implementation. The implemented update regime was made on an ARM-based single-board computer with a Linux-kernel-based operating system running Docker. Experiments were then conducted in order to give a clear indication of the behavior of a dynamically updated embedded system.

The experiments showed that the update regime can be seamless, meaning that the uptime properties are not affected by this kind of updating. The experiments also showed that no significant changes in current can be noted for container limitations during this kind of update.

Flexibel uppdatering av inbyggda system med hjälp av containrar


Sammanfattning

I denna uppsats undersöks virtualiseringslösningen Docker i samband med uppdatering på applikationsnivå i ett inbyggt system. En uppdateringssekvens är utformad, modellerad och genomförd, samt experiment genomförda för att mäta upptid och ström.

Samhället blir mer och mer uppkopplat, fler och fler saker är anslutna till Internet och därmed skapas möjligheter för dynamiska uppdateringar via Internet. Att kunna genomföra fjärruppdateringar kan vara väldigt fördelaktigt eftersom det kan vara dyrt eller opraktiskt att fysiskt nå system för programuppdateringar. Operativsystemnivå-virtualisering, mjukvarucontainrar, är en lättviktig virtualiseringslösning som kan användas för dynamiska uppdateringsändamål. Virtualiseringsegenskaper, såsom resursisolering och att programvara delar hårdvarufunktioner, används för att bestämma arkitekturen. Containerarkitekturen som används är en mikrotjänstarkitektur, där systemen är uppbyggda av många mindre, löst kopplade tjänster.

Användningsområdet för resultaten av denna avhandling är nystartade företag som befinner sig i marknadsområdet för det uppkopplade samhället, begränsat till system med låg komplexitet såsom konsumentprodukter. Uppdateringssekvensen skapas med egenskaperna hos mikrotjänstarkitekturer i åtanke; en självgående, självtestande, skalbar och sömlös dynamisk uppdateringsprocess, som kan användas för system av olika komplexitet. Uppdateringssekvensen modelleras för att ge bevis på konceptet och för att underlätta utformningen av genomförandet. Den genomförda uppdateringssekvensen gjordes på en ARM-baserad enkortsdator med ett Linux-kärnbaserat operativsystem som kör Docker. Experiment utfördes sedan för att ge en tydlig indikation på beteendet vid dynamisk uppdatering av ett inbyggt system.

Experimenten visade att uppdateringssekvensen kan vara sömlös, vilket innebär att upptidsegenskaperna inte påverkas av denna typ av uppdatering. Experimenten visade också att inga väsentliga förändringar i ström kan noteras för begränsningar av containern under denna typ av uppdatering.

Hofstadter's Law: It always takes longer than you expect, even when you take into account Hofstadter's Law.

- Douglas Hofstadter

Contents

1 Introduction
  1.1 Background
  1.2 Objectives
  1.3 Scope
  1.4 Method
  1.5 Sustainability
    1.5.1 Environmental Sustainability
    1.5.2 Social Sustainability and Ethics
    1.5.3 Economical Sustainability
  1.6 Reading Instructions

2 Prestudy
  2.1 Definitions
  2.2 Target Market and Parameters of Interest
    2.2.1 Taxonomy of System Parameters
    2.2.2 Target Market
  2.3 Taxonomy of Virtualization
  2.4 Linux Containers
    2.4.1 Docker
    2.4.2 Related Work
    2.4.3 Docker Compatible Hardware
  2.5 Remote Update
    2.5.1 Dynamic Software Updating
    2.5.2 Microservices Principle

3 Implementation
  3.1 Update Regimes with Containers
  3.2 Design Guidelines
  3.3 Overview of the Chosen Updating Regime
  3.4 Detailed Model
  3.5 Implementation Setup
    3.5.1 Platform Specifics
    3.5.2 Docker
    3.5.3 Development Environment
  3.6 Container Implementation
    3.6.1 Hardware Specific Container
    3.6.2 Application Containers
    3.6.3 Container Communication

4 Experiment Design
  4.1 Experiment Method
    4.1.1 EX 1: Normal Distribution Check
    4.1.2 EX 2: 2-factor, Full Factorial 2-level Experiment with Center Points
    4.1.3 EX 3: 2-factor, Full Factorial 3-level Experiment
  4.2 Analysis
  4.3 Measurements
    4.3.1 Actuator Signals
    4.3.2 Changes in Current

5 Result
  5.1 Uptime
    5.1.1 EX 1: Normal Distribution Check
    5.1.2 EX 2: 2-factor, Full Factorial 2-level Experiment with Center Points
    5.1.3 Comparison to Behavior Without Container
  5.2 Current Measurements
    5.2.1 EX 1: Normal Distribution Check
    5.2.2 EX 2: 2-factor, Full Factorial 2-level Experiment with Center Points

6 Conclusion

7 Discussion
  7.1 Result
  7.2 Update Regime and Model
  7.3 Implementation
  7.4 Method
  7.5 Security

8 Future Work

9 Bibliographies

A Work Division
B Hardware List
C Model
D Code
  D.1 Docker Files
  D.2 GPIO Container
  D.3 Application Containers

Abbreviations

ADC     Analog to Digital Converter
API     Application Programming Interface
App     Application
appc    App (application) Container (project)
ARM     Advanced RISC Machine
BCET    Best-Case Execution Time
BSD     Berkeley Software Distribution
CISC    Complex Instruction Set Computing
CPS     Cyber-Physical System
CPU     Central Processing Unit
CSV     Comma Separated Value
DSP     Digital Signal Processor
DSU     Dynamic Software Updating
ES      Embedded System
FPGA    Field-Programmable Gate Array
GNU     GNU's Not Unix
GPIO    General Purpose Input/Output
GPL     General Public Licence
GPU     Graphics Processing Unit
HDMI    High-Definition Multimedia Interface
HPC     High Performance Computing
HTTPS   HyperText Transfer Protocol Secure
IBM     International Business Machines Corporation
I/O     Input/Output
I2C     Inter-Integrated Circuit
ID      Identifier
IoT     Internet of Things
IP      Internet Protocol
IT      Information Technology
KVM     Kernel-based Virtual Machine
LED     Light-Emitting Diode
lmctfy  let me contain that for you
LXC     LinuX Containers
MIPS    Microprocessor without Interlocked Pipeline Stages
NPB     NAS Parallel Benchmarks
OCI     Open Container Initiative
OS      Operating System
PC      Personal Computer
PID     Proportional-Integral-Derivative
PLC     Programmable Logic Controller
PMMU    Paged Memory Management Unit
PWM     Pulse Width Modulation
PXZ     Parallel XZ
RAM     Random Access Memory
RISC    Reduced Instruction Set Computing
RMS     Root Mean Square
ROM     Read-Only Memory
RQ      Research Question
SBC     Single-Board Computer
SD      Secure Digital
SMPP    Short Message Peer-to-Peer
SoC     System-on-a-Chip
STREAM  Sustainable Memory Bandwidth in Current High Performance Computers
TCP     Transmission Control Protocol
UDP     User Datagram Protocol
UHS     Ultra High Speed
UI      User Interface
UML     Unified Modeling Language
USB     Universal Serial Bus
VM      Virtual Machine
VMM     Virtual Machine Monitor
VPS     Virtual Private Server
WCET    Worst-Case Execution Time

1 Introduction

1.1 Background

This thesis investigates how containers for Linux-like operating systems (built on the Linux kernel), specifically Docker, can be used for updating the software of mechatronic and embedded products. Containers, also called operating-system-level virtualization, are a technique for resource isolation that lets software share the hardware capabilities of a computer using virtualization. Containers, however, differ from virtual machines in that they do not simulate the hardware; instead, everything runs directly on the kernel. The thesis topic was initiated by Tritech Technology AB, a consulting firm that specializes in intelligent systems for the connected society.

In the field of Cyber-Physical Systems, the visible trend is the notion of the Internet of Things [1, 2], where Cyber-Physical Systems are not only controlling themselves, but are also connected to and communicating with other systems. There is, however, no clear definition of the Internet of Things [3, 4]. Embedded systems within the notion of the Internet of Things can be seen as the result of the increasing computing capabilities of embedded systems and the development of the Internet, where broadband connectivity is becoming cheaper and more common [2].

There are several benefits to having embedded systems connected. These systems may be impossible or impractical to reach physically for software updates, as they are permanently enclosed or situated in remote places. Accordingly, if a software update fails and the embedded system is stuck in an error state, so that it can only be put back into service through physical access, this may amount to serious problems and costs. Benefits can also be found in enabling software development to continue after launching an embedded product, allowing it to adapt to changing use or environment. Tritech Technology AB wants to see how containers can be used in a remote update.

Cyber-Physical systems often have requirements beyond those of purely computational systems (e.g. servers). Though they can be expressed similarly, purely computational systems can be migrated, while the computational unit of a Cyber-Physical system might be the only thing connected to the hardware. The effects of utilizing a container for software updates therefore need to be examined at the whole-system level. There is thus a need to evaluate the capabilities of these containers. This is to be the start, or first building block, for the company to further investigate or develop systems on this technology.

Updating an embedded system may be a critical activity that introduces some challenges. Hardware resources such as memory and CPU are limited, and therefore put constraints on the software that can be run on the device, e.g. for updating. It is also common for an embedded system to have performance requirements that introduce deadlines or timing constraints for processes. In these cases, the tasks of the system must not be impeded by a process such as a software update. Being able to perform a reliable update on an embedded system may be of paramount importance in terms of functionality, performance, safety and security.
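As a concrete illustration of the resource isolation discussed above, the sketch below shows how a hypothetical containerized service could be started with kernel-enforced resource limits and later replaced remotely. The image names, container name and limit values are placeholders, not the setup used in this thesis.

```shell
# Hedged sketch: run a service in a container with cgroup-enforced
# limits. No hardware is emulated; the process runs directly on the
# host kernel. "my-sensor-image" and the limits are placeholders.
docker run -d --name sensor-monitor \
  --memory=64m \
  --cpu-shares=512 \
  my-sensor-image:1.0

# A naive remote update: pull the new version, then swap containers.
docker pull my-sensor-image:1.1
docker stop sensor-monitor
docker rm sensor-monitor
docker run -d --name sensor-monitor my-sensor-image:1.1
```

Note that the naive stop-and-swap above implies a short service gap; a seamless update regime, such as the one designed in this thesis, aims to avoid such a gap.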

1.2 Objectives

The main objective of this thesis is to investigate containers' capabilities of performing a seamless software update in an embedded system by utilizing a microservice approach. This will be done by implementing a software update regime, and then by examining how the performance of a hardware platform as an embedded system is affected by utilizing this regime. The performance is in the context of an embedded system, where the embedded system is expected to remain connected and perform certain tasks. Focus lies on the embedded system context, investigating effects on its external performance rather than its internal.

One case of typical embedded systems was chosen: cyber-physical system prototypes for start-ups in the Internet of Things field, more specifically targeting low-complexity systems such as consumer products, typically in home automation or "smart homes". Representative examples of the products in mind are power or water consumption monitoring systems, or irrigation systems. The container solution used is Docker, an operating-system-level virtualization tool that can be run on a Linux kernel. Experiments will be performed by limiting the resources and investigating how and if this affects the update performance. An updating approach inspired by microservices is going to be designed, modeled and implemented. The effects of utilizing containers when updating will be investigated with regard to some quantitative properties of the system: power consumption and uptime.

The research questions that need to be answered are:

RQ 1: What hardware capabilities are needed to utilize Docker containers?
RQ 1.1: What CPU capabilities are needed to run Docker containers?
RQ 1.2: What memory capabilities are needed to run Docker containers?
RQ 2: While updating soft-deadline functionality software in an embedded system utilizing containers as microservices, what is the effect on uptime performance for that functionality?
RQ 3: How are changes in power consumption of the embedded system related to the update?
The major contribution of this thesis is to propose an updating regime suitable for use with containers, and to collect metrics and report results for the research questions so that future researchers can benchmark other solutions against them. It also aims to give engineers results for determining whether Docker is the solution that suits their requirements, by comparing their requirements on the parameters investigated in this report with the results given here. This work further aims to illuminate a fraction of the possibilities of using containers, especially Docker, in embedded systems, to contribute to evaluating the suitability of utilizing the technique in this way, and to demonstrate relationships in this kind of update regime.
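To make the uptime notion in RQ 2 concrete, the sketch below shows one way downtime could be estimated from a periodic heartbeat emitted by the monitored functionality. The period, tolerance and trace values are illustrative assumptions, not measurements from this work.

```python
def downtime_from_heartbeats(timestamps, period, tolerance=0.5):
    """Estimate downtime as the accumulated excess of inter-heartbeat
    gaps that exceed the expected period by more than `tolerance` *
    period. All values are in seconds; the tolerance is an assumption."""
    limit = period * (1 + tolerance)
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return sum(g - period for g in gaps if g > limit)

# Illustrative trace: a 1 s heartbeat with one 3 s stall mid-update.
beats = [0.0, 1.0, 2.0, 5.0, 6.0, 7.0]
print(downtime_from_heartbeats(beats, period=1.0))  # 2.0
```

In the experiments of this thesis, uptime is instead observed through actuator signals (see 4.3 Measurements); the sketch only illustrates the underlying idea of detecting gaps in expected periodic behavior.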

1.3 Scope

The chosen application domain induces some limitations on the characteristics of the systems on which the update could be implemented. Cyber-physical system prototypes for start-ups in the Internet of Things field imply low-cost, non-specialized hardware. Furthermore, the implementation will foremost focus on uptime rather than real-time requirements. There will, however, be an ongoing discussion and a readiness to implement for real-time systems, as these are often a part of cyber-physical systems. Properties that are not related to the market of interest will not be investigated.

The updating regime, as implemented in this work, will be representative as a method. It is, however, not intended to be implemented and used as it is. No effort will be put into ensuring the security of the system, nor into implementing any major handling of errors due to bugs. The implementation should be viewed as a proof of concept rather than a regime to be set into production. The model of the updating regime is made to give an overall understanding of the logic of the regime, as well as to further give a proof of concept. It is not designed to generate code from, nor will it perfectly represent the code written for the containers, due to limitations in the modeling.

This thesis does not claim to produce reliable absolute data, but instead investigates relationships, changes and differences. Only the Linux domain will be looked into, and no other operating systems, as Linux is ubiquitous in embedded systems and the only OS linking containers and embedded systems together. The thesis will not give a clear indication of whether container virtualization should be used for embedded systems or not, since this depends on the implementation and the requirements on the system. It does, however, show that it could be used for this purpose.
In this thesis, uptime means availability and the ability to perform a task without interruption or corruption. Uptime is investigated from the designed update regime's point of view. The investigation assumes non-harmful programs (applications) and a stable Docker Engine and Linux kernel. The experiments designed in this thesis are made within the limitations of what is measurable. They are conducted within the objectives of the thesis (1.2 Objectives) and from an embedded systems and cyber-physical systems point of view.

1.4 Method

The chosen approach is a quantitative method. A container is a software implementation, and the characteristics of it to be investigated in this thesis can be quantified. For external validity, a quantitative method is to be preferred. A qualitative method could also be used; however, the point of interest would then be tilted towards usage, or appropriate usage, of a container [5, 6]. Knowledgeable professionals would be interviewed to get an insight into the current perception of containers. A more inductive reasoning could be used, with a literature study at the end. A mixed method could also be used; conclusions would be drawn from literature studies, earlier evaluations and finally a case study. These approaches with a qualitative element demand, however, that there already is a reasonable amount of knowledgeable professionals and/or literature on the subject. Knowledgeable professionals seem to be quite hard to find, unfortunately, due to the novelty of the technology in this broader area of cyber-physical systems.

In the method used, the research questions will be investigated by conducting experiments. First, a prestudy will be conducted to get a clear picture of operating-system-level virtualization (containers) as a whole: what it is and how it works. A taxonomy of virtualization will be presented, as well as related research concerning the performance of containers. Docker, as a container solution, will then be described in detail. The same will be done for the topic of updates and remote updates. Furthermore, which embedded system hardware can run a container will be investigated, and one of these hardware options will then be utilized for experimenting. However, the external validity of the hardware used for the experiments has to be motivated in order to generalize from the findings [6]; thus it has a qualitative aspect. A typical mechatronic system will be implemented on the chosen hardware, and motivation for its typicality will also be included in order to generalize from the findings. The measured effect on the system by a container, in terms of the research questions (1.2 Objectives), will be investigated by designing experiments and setting up a test environment (both software and hardware). The experiments will be designed using the methods described in the e-Handbook of Statistical Methods [7], measuring how the embedded system handles a software update when utilizing a container. Finally, Docker containers will be implemented on the platform in accordance with the experiment design. For analysis, statistics will be used to quantify the data gathered from the experiments. The work division can be seen in Appendix A. The method has two elements that are qualitative: the hardware choice and the mechatronic system design. This is a result of the scope of the thesis; it is neither practical nor possible within the timespan of this thesis to test all possible hardware options.
The hardware option chosen will therefore need to be representative in some way. Besides being hardware used for embedded systems and capable of running a container environment, it will represent hardware used by startup companies or for prototyping. It will be the type of hardware that has a set design and therefore a low initial cost. Embedded hardware can be designed by companies to fit their specific needs, but this demands that the companies know exactly what they require of the system. When building prototypes, however, the requirements might change within the prototyping process, so it might be beneficial to postpone the choice of hardware design. Companies in the IoT-startup market (see 1.2 Objectives) might want to skip the hardware design altogether, for cost reasons or because development boards already fulfill the requirements and a re-design would only be an additional cost. In order to let the choices represent this market, the CPU architecture will be the major design factor. The findings will therefore be useful for the specific hardware and can be generalized within the typical CPU architecture. The experiment environment will also have to be designed to be representative of a typical embedded or mechatronic system in the IoT-startup market. Here, too, the external validity will be rigorously motivated and substantiated by literature.
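The factorial experiment designs referred to in the e-Handbook of Statistical Methods (and used in 4 Experiment Design) can be generated mechanically. The sketch below builds a full factorial design in coded units, with optional center-point runs; the factor levels and the number of center points are illustrative choices, not the exact run order used in this thesis.

```python
from itertools import product

def full_factorial(levels_per_factor, n_center=0):
    """Return a full factorial design in coded units. Each factor's
    levels are given explicitly, e.g. [-1, +1] for a 2-level factor.
    Optionally append `n_center` center-point runs at coded 0."""
    runs = [list(combo) for combo in product(*levels_per_factor)]
    runs += [[0] * len(levels_per_factor) for _ in range(n_center)]
    return runs

# A 2-factor, 2-level design with 3 center points: 2**2 + 3 = 7 runs.
design = full_factorial([[-1, 1], [-1, 1]], n_center=3)
for run in design:
    print(run)
```

The same helper yields a 2-factor, 3-level design (as in EX 3) by passing `[[-1, 0, 1], [-1, 0, 1]]`, giving 9 runs.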

1.5 Sustainability

1.5.1 Environmental Sustainability

The updating regime presented in this thesis is to be seen as a solution within the notion of the Internet of Things. IoT can be used to benefit the environment, for example through Green-IT and energy efficiency in logistics [2]. Besides the positive aspects that IoT might bring with it, there are more direct positive sustainability aspects of updating with software containers. Being able to update cyber-physical systems in a reliable way might lead to fewer hardware replacements, which might save much of the material and energy losses connected to renewing hardware. It may also

lessen the impact of needing to reach embedded systems in remote areas in order to update them. During this thesis work, the power overhead of updates with containers will be looked into, to enable further environmental evaluation.

1.5.2 Social Sustainability and Ethics

The Internet of Things (IoT) field may have many different applications that are beneficial for the connected society; it might produce jobs and have a positive impact on the safety and security of systems. It might also be used for the opposite. The more complicated a system becomes, the harder it is to test that all safety requirements are always met. The same applies to security. A connected system runs the risk of security breaches by unwanted parties. Systems might keep a lot of information about their users, and might also hold other sensitive information that needs to be protected. Another question that should be considered before using or implementing a connected device is whether the provider of the device owns or has access to the information the user gives to it. The question of surveillance is still very much on the agenda [8], and before the Internet of Things is readily implemented, the question of integrity should be sorted out.

1.5.3 Economical Sustainability

The Internet of Things (IoT) may also have a positive economic impact due to the creation of more products and the possibility of streamlining services. At the same time, the more work machines are able to perform that was originally done by humans, the less work there will be for humans. In this case, the fact that the update can be done remotely means that no one needs to be hired to visit all the systems and connect to them. The question of what will impact the economy most, the work created by technology or the work overtaken by technology, is something not really considered in a capitalist market.

1.6 Reading Instructions

This thesis report continuously refers, for clarification purposes, to different parts of the report with both chapter number and title. For further details regarding the implementation, the report refers to the extensive appendices, referenced at relevant points in the text. The abbreviations page at the beginning of the report provides the full names of a great number of terms. References are referred to by number and presented in the bibliography, in the order they are introduced in the text. The research questions, which recur throughout the report, are denoted in bold style. Bold is also used occasionally to identify different alternatives or parts. Italics are used for emphasis but also to denote programs, file systems, kernel features etc. Commands, functions and system calls are accentuated as command.

2 Prestudy

In this section, the prestudy on which this thesis relies is presented, along with the literature that will be used.

2.1 Definitions

Throughout this thesis, different concepts and words are going to be used; this section presents their definitions. Within the realm of virtualization, the concept "host" refers to the physical node or computer. The concept "guest", on the other hand, refers to the application, OS or other service that is run within a virtualization, on an abstraction layer. Throughout this thesis, the word "container" means the operating-system-level virtualization technique, a software container (see 2.3 Taxonomy of Virtualization), and "Docker" means the tools gathered under the name Docker to build and deploy containers and the software within them (see 2.4.1 Docker). By embedded system (ES) is usually meant a system that has a specific task and is controlling or monitoring something physical or electrical. For embedded systems, however, the definition becomes a bit blurry, as an update of such a system can change its function, and the system might therefore in some sense be regarded as multi-purpose. For this thesis, when talking about embedded systems, the systems regarded will traditionally have specific tasks. As the application area can also be described as cyber-physical system products, and as this area is closely related to both embedded systems and the Internet of Things, the cyber-physical systems domain will be investigated together with that of embedded systems.

2.2 Target Market and Parameters of Interest

2.2.1 Taxonomy of System Parameters

"Embedded systems" is a wide and loose term, but can shortly be described as "information processing systems that are embedded into a larger product", as stated by Marwedel [9]. A cyber-physical system is, as described by Lee and Seshia, "an integration of computation with physical processes whose behavior is defined by both cyber and physical parts of the system" [10]. Lee and Seshia further state that cyber-physical systems (CPS) are composed of physical subsystems, computation and communication, and have both static and dynamic properties. The physical parts of the system can vary significantly, as CPS are found in applications ranging from entertainment and consumer products, via energy and infrastructure, to the aerospace and automotive industries. Contact with the physical environment is possible through sensors and actuators, which act as the intermediaries between the discrete world of numerical values and the analog world of physical effects [9]. In order to design and analyze these kinds of systems, certain aspects and properties must be regarded as decisive for the system performance. Marwedel lists several characteristics, such as embedded systems having a connection to the physical environment through sensors and actuators, and being dependable in terms of reliability, maintainability, availability, safety and security. He also states efficiency as a characteristic, expressed in the following key metrics for evaluation:

• Energy - as many systems are mobile and batteries are a common energy source
• Code size - as hard disk storage is often not available, SoCs are common, and all code must therefore be stored in the system with its limited resources
• Run-time efficiency - to use the least possible amount of hardware resources and energy, supply voltage and clock frequency should be kept at a minimum, and only components that improve the WCET through e.g. memory management should be kept
• Weight - if it is a portable system, weight is of great importance
• Cost - the consumer market especially is characterized by hard competition, where low costs put constraints on hardware components and software development

This list can further be extended with the following design considerations for networked embedded systems, listed by Gupta et al. [11]:

• Deployment - where safety, durability and sturdiness are important parameters
• Environment interaction - to react to an ever-changing environment and adapt to changes, correct and precise control system parameters concerning e.g. a feedback loop become crucial
• Life expectancy - low power consumption and fault tolerance
• Communication protocol - in a distributed network of nodes, dynamic routing, loss tolerance and reconfiguration abilities in terms of communication may be needed
• Reconfigurability - making it possible to adjust functionality and parameters after deployment
• Security - for which computationally lightweight security protocols are important
• Operating system - the hardware constraints may require an optimized OS, which in turn sets terms for the software running on the system

Other characteristics are real-time constraints. Many embedded systems perform in real time, where tasks not only have to be delivered correctly; a crucial aspect of the performance is that they are delivered on time. The real-time behavior can affect both the quality and the safety of the system service.
Here it is possible to distinguish between soft and hard deadlines, as well as between average performance and corner cases. Thiele and Wandeler state that the timing behavior of the system can most often be described by the time interval between two specific events [11]. These can be denoted arrival and finishing events, e.g. a sensor input, an arriving packet, or the instantiation and finishing of a task. This applies both to external communication and to the internal task handling of the system. Keywords in this area are execution time and end-to-end timing. System parameters affecting these may include, but are not limited to: response time, end-to-end delay, throughput, WCET, BCET, upper and lower bounds, jitter and deadlines.
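Several of the timing parameters just listed can be derived from a set of measured execution times. The sketch below computes observed bounds and peak-to-peak jitter, with the caveat that measured extremes only bound, and do not prove, the true WCET and BCET; the sample values are invented for illustration.

```python
def timing_metrics(exec_times_ms):
    """Observed timing metrics from measured task execution times (ms).
    Measured extremes only bound the observed worst/best case; they
    are not proven WCET/BCET values."""
    wcet_obs = max(exec_times_ms)
    bcet_obs = min(exec_times_ms)
    return {
        "wcet_obs": wcet_obs,
        "bcet_obs": bcet_obs,
        "jitter": wcet_obs - bcet_obs,  # peak-to-peak variation
        "avg": sum(exec_times_ms) / len(exec_times_ms),
    }

# Invented sample of execution times for a periodic task, in ms.
print(timing_metrics([10, 12, 11, 15, 10]))
# {'wcet_obs': 15, 'bcet_obs': 10, 'jitter': 5, 'avg': 11.6}
```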

2.2.2 Target Market

The Internet of Things environment possesses a high degree of both hardware and software heterogeneity, concerning both functionality and network protocols. Because of this, recent suggestions point towards moving away from these specificities to facilitate interoperability and to support application development. Platforms that aim to achieve this should "(i) efficiently support the heterogeneity and dynamics of IoT environments; (ii) provide abstractions over physical devices and services to applications and users; (iii) provide device management and discovery mechanisms; (iv) allow connecting these elements through the network; (v) manage large volumes of data; and (vi) address security and scalability issues", as stated by Cavalcante et al. [12]. However, they also state that no complete consensus exists on which functional and non-functional properties an IoT platform should possess. According to Kanuparthi, Karri and Addepalli [13], a typical Internet of Things (IoT) system architecture contains:

• Sensing and data collection (sensors)
• Local embedded processing, at the node and the gateway
• Activating devices based on commands sent from the nodes (actuators)
• Wired and/or wireless communication (low-power wireless protocols)
• Automation (software)
• Remote processing (federated compute-network-storage infrastructure)

They describe the architecture as consisting of several tiers: from low-processing-capability sensors, through processing nodes which also have limited storage, processing and power, to gateway Internet interfaces with good processing power and memory. Konieczek et al. point out that as most IoT devices observe and manipulate their environment through sensors and actuators (i.e. act as cyber-physical systems), and as the physical world is continuous, the execution time of the applications must conform to certain boundaries [14]. These are referred to as real-time requirements, as described earlier. Three levels of real-time applications can be discerned, depending on the properties of the deadlines. Hard real-time applications cannot miss any deadline, or system failure and dire consequences will occur.
Firm real-time applications can be described as the information delivered after a deadline having no value, while in soft real-time applications the delivered information rather decreases in value with time passed since the violated deadline; this is shown in Figure 1. For these kinds of applications, occasional violations of the deadlines may be acceptable behavior, and having only task and operating system call priorities may even be sufficient, as soft deadline scheduling frequently is based on extensions to standard operating systems [9].

In the field of Internet of Things, and in particular that of the smart home, sensors and actuators could for example be: motion, light, pressure, temperature, sound, distance and humidity sensors, and electric motors, LEDs, cameras, speakers and valves. When connected to e.g. a motor, position and velocity may be parameters that are added to the set of system parameters worth analyzing. When it comes to computation, CPU properties and memory management are vital in deciding the performance of the system. For the processor, clock speed and workload are significant, and for memory it is size, type and usage that are highly interesting.

A typical IoT smart home system might for example be a monitoring system, such as a power or water consumption monitoring system. These kinds of systems consist of monitoring sensors, collecting information to a processing unit which then communicates information to the outside world or possibly also performs a task with an actuator. Communication can typically be performed through a display, some LEDs or through more complex data being sent to a mobile phone or computer through the Internet. A typical actuating task may be rotating a servo, starting

Figure 1: This figure shows the difference between hard- and soft-deadline real-time systems, where an example function is shown for the loss of value for the information in a soft real-time system.

Figure 2: Distribution example of the response time for hard and soft real-time where the deadline represents the time when the system is viewed as non- or very slow-responsive.

a timer or flicking a switch. Other smart home implementations might include switching lights on and off or controlling curtains, where the system might be triggered by sensors or by an internal system clock. It may also be controlled by another device, such as a mobile phone or a computer. There are no limitations to the types of systems that may be included in smart homes. A monitoring system relies on some amount of uptime to be able to produce accurate information; the system might be a polling system with hard or soft deadlines. In this thesis, soft deadlines are foremost of interest (see 1.3 Scope). Different periods between the polling or actuating may exist. For a monitoring system, updates every minute or every 10 minutes may be of interest. For curtains and light, some responsiveness is to be preferred if the system is controlled by a user. This responsiveness should be in the order of seconds but can still be a soft-deadline system, with a distribution of the response time depending on communication and computation. For these types of systems, having a system performing as fast as possible is often good enough for the user, as long as the peaks of the response time are not outside a reasonable interval. This interval depends on the system and the requirements set on it; an example of the distribution can be seen in Figure 2.
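The distinction between hard, firm and soft deadlines described above can be illustrated with simple value functions. The sketch below is illustrative only; the exponential decay rate for the soft case is an arbitrary choice, not a model taken from the thesis:

```python
import math

def value_hard(t, deadline):
    """Hard real-time: a result after the deadline means system failure."""
    if t <= deadline:
        return 1.0
    raise RuntimeError("deadline miss causes system failure")

def value_firm(t, deadline):
    """Firm real-time: a late result is simply worthless."""
    return 1.0 if t <= deadline else 0.0

def value_soft(t, deadline, decay=1.0):
    """Soft real-time: value decays with time passed since the deadline."""
    if t <= deadline:
        return 1.0
    return math.exp(-decay * (t - deadline))

# A result 0.5 s late still carries some value in a soft system...
print(round(value_soft(1.5, 1.0), 3))   # prints 0.607
# ...but none at all in a firm system.
print(value_firm(1.5, 1.0))             # prints 0.0
```

A smart-home monitoring system with second-scale responsiveness would typically be modeled by the soft value function, matching the response-time distribution shown in Figure 2.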

2.3 Taxonomy of Virtualization

”Virtualization is a term that refers to the abstraction of computer resources” [15]. What this means, in short, is that some part of the computer is put on a level of abstraction for some other part of the computer in order to simulate the environment to run applications on. The computer can virtualize that it has multiple or other hardware or software components running. Virtualization can be used for a number of reasons; Sahoo, Mohapatra and Lath [15] list the main reasons as:

• Resource sharing - Sharing hardware capabilities of the host, such as memory, disk and network.
• Isolation - Virtual machines can be isolated from each other; they are unaware of each other and are inherently limited in affecting each other.

There are several types, or levels, of virtualization. Sahoo, Mohapatra and Lath [15] list them as:

A. Full Virtualization
B. Hardware-Layer Virtualization
C. Para Virtualization
D. OS-Layer Virtualization
E. Application Virtualization
F. Resource Virtualization
G. Storage Virtualization

Full virtualization is when the level of virtualization is so high that any application or operating system installed within the virtual environment needs no modification and needs not be made aware that it is running within a virtual environment. This creates a very isolated environment but can lessen the performance by 30% [15]. This type of virtualization can be

binary translation; it emulates one processor architecture over another processor architecture, a complete emulation of an instruction set. Also in this category are some virtual machines, which are created by virtual machine monitors (VMMs). VMMs were defined by Popek and Goldberg in 1974 as having three characteristics: ”First, the VMM provides an environment for programs which is essentially identical with the original machine; second, programs run in this environment show at worst only minor decreases in speed; and last, the VMM is in complete control of system resources” [16]. Popek and Goldberg also claim that the demand on efficiency removes emulators from the VM category. A VMM is also often called a hypervisor in the literature; since no clear distinction can be found between the two, they are here deemed equivalent.

Hardware-layer virtualization, or hardware assisted virtualization, is when a VMM (Virtual Machine Monitor) runs directly on hardware and the guests run their own OS. It has low overhead, and such VMMs are sometimes called native or bare-metal hypervisors. Example solutions in this category are Kernel-based Virtual Machine (KVM), Xen, Hyper-V, and VMware products [17].

Para virtualization is when the interface to the hardware is modified to control what the guest can do. Here the guest OS must be modified in order to use this interface and guest machines know that they are running in a virtualized environment. It is simple and lightweight which allows Para virtualization to achieve performance closer to non-virtualized hardware. Example solutions in this category are Denali, Xen, and Hyper-V [17].

OS-layer virtualization, also called containers, is the virtualization technique this thesis places its emphasis on. It has its virtualization layer on top of a kernel or an OS. Here it is not the hardware but the host OS that is being virtualized. This way of virtualizing is less performance draining than full virtualization. Resources like memory, CPU and disk space can be reassigned both at creation and at runtime. It does, however, not have the same level of isolation as full virtualization.

In application virtualization, the user runs an application without installing it. Instead, a small virtual environment with only the resources needed for the application to execute is run. This virtualization is often used for isolation, to see if an application is safe to install (safe meaning it fulfills requirements on safety and security). Isolation for this purpose can also be called sandboxing.

Resource virtualization can mean adding an abstraction level in order to make applications see isolated parts of a distributed system (memory, computing capabilities) as one whole, or, the reverse, in order to create partitioning and isolation. Storage virtualization falls within this category, where scattered memory is seen as one whole memory ”pool” [15].

Rodríguez-Haro et al. [17] couple operating-system-level virtualization with para-virtualization because the two techniques are based on execution of modified guest OSs. They also couple binary translation virtualization with hardware assisted virtualization because both are based on execution of unmodified guest OSs.

Other benefits that Sahoo, Mohapatra and Lath [15] list include migration properties: if the application is in a virtualization layer, it is easier to migrate onto other hardware, perhaps with more hardware capability, and can therefore give a more reliable execution and benefits like availability, flexibility, scalability, and load balancing over a distributed system. Other benefits listed are the ability to fully utilize hardware and the cost-efficiency connected to the economics

of scale and labour overhead, among others. Security is also mentioned: due to separation and isolation, if one service is compromised, the other services are unaffected. The classification made in this section is not the only classification of virtualization that can be made. Hypervisors have in some cases been divided into two classes [18]:

• Type-1 - bare-metal hypervisors
• Type-2 - hosted hypervisors

where Type-1 refers to the types run directly on the hardware and Type-2 refers to the type that is hosted by an operating system.

2.4 Linux Containers

In this thesis, we concentrate on operating-system-level virtualization for Linux kernel based operating systems, specifically Docker (see 2.4.1 Docker). There are, however, solutions for other operating systems, such as Spoon Containers and VMware ThinApp for Windows [19, 20].

Unix is an operating system developed by Ken Thompson and Dennis Ritchie (and others) in the 1970s, developed for multitasking and multi-user applications [21]. Since then, there have been many derivations of the Unix operating system, and Unix has now been adopted to mean the Single UNIX Specification describing the qualities that Unix aimed at. Operating systems derived from UNIX are called Unix-like operating systems. The containers for Unix-like operating systems are all in some way based on chroot [22], which is a functionality or process for most Unix-like operating systems where the root directory is virtualized for a process (and its children). This virtualized environment is sometimes called a ”jail”. It is important, however, to note that in contrast to what the name implies, there are ways for processes to break out of this virtualized root directory. The chroot mechanism was introduced in the late 1970s for the Unix operating system [23]. Tools have since been developed for the same purpose, like systemd-nspawn for Linux [24].

The Linux kernel was created by Linus Torvalds, aims at the Single Unix Specification, and was developed to be a clone of Unix [25]. In contrast to other operating systems that have a container implementation (Solaris, Windows, FreeBSD) [26], it is open source, widely used, and used in embedded systems [27]. The Linux kernel has a built-in container solution called LXC (LinuX Containers) [28], which is built upon the cgroup (control group) feature of the kernel that offers isolation and limitation of a group of processes (a container) regarding CPU, memory etc.
It also utilizes namespaces for further isolation of the processes, as well as chroots and other kernel features. Linux-VServer is similar to LXC; it uses chroot and some other standard tools for virtualization [29]. It is not, however, embedded in the Linux kernel distributions. Linux-VServer creates what is called Virtual Private Servers (VPS), containers designed for server applications and to fulfill server demands on isolation and resource management. The last logged change in their releases is from 2008, so there seems to be no major development of this container solution. Another solution is Virtuozzo Containers, which is a solution package of containers specifically for servers and cloud solutions [30]. It is proprietary software that creates containers for either Linux or Windows. The basis for creating the Virtuozzo containers, however, is OpenVZ [31], which is open source under the GNU GPL license. It should be noted that as of March 2015, Virtuozzo containers support Docker containers being run within them, noting the Docker capabilities

of simplifying container management and deployment, but adding security with Virtuozzo containers as they have more security implementation [30].

Let me contain that for you, shortened lmctfy [32], is the open source version of Google's container stack [33]. Google's container is a container solution also building upon cgroups in the Linux kernel, but is designed for Google and Google's needs of scalability and concurrency, due to the vast amount of processes within their servers [34]. Lmctfy has however been abandoned for Docker's libcontainer [32] (now called runC).

The operating system developers behind CoreOS, an OS also based on the Linux kernel, started appc [35]. Appc, short for App (application) Container, was a project to define a standard for application containers. CoreOS also developed their own container mechanism in December of 2014. They named the runtime rkt (pronounced ”rock-it”) [36], based on the appc standard.

The container mechanism that will be used in this thesis is Docker [37]. Originally, Docker used LXC to create containers, but moved over to their own container mechanism, libcontainer, in the 0.9 release 2014 [38]. In 2015 Docker announced that libcontainer would be the foundation for the Open Container Initiative (OCI) [39, 40], joining with appc [41] for development. The objective is to make an industry standard for containers, and a number of actors have joined in: Apcera, AWS, Cisco, EMC, Fujitsu Limited, Google, Goldman Sachs, HP, Huawei, IBM, Joyent, Pivotal, the Linux Foundation, Mesosphere, Rancher Labs, Red Hat, and VMware among them. CoreOS had created appc and rkt as an alternative to Docker, still utilizing namespaces and cgroups, but developing further for containers where they thought Docker lacked [42]. They, at the time, saw Docker becoming more of a platform rather than a container development; the OCI means to again strive for a standard.
The container mechanism developed through the Open Container Initiative is runC [43]; libcontainer as a project has been stopped. One of the motivations for using Docker, and thereby runC, as the container platform is the fact that other initiatives, like lmctfy, have been abandoned in favor of the Docker platform and its mechanisms. The mechanism, runC, is also a product of the Open Container Initiative, which means that the container will follow a standard supported by both the Linux Foundation and Microsoft, indicating that this standard may become the global standard for operating-system-level virtualization in the future. Docker as a platform also has tools simplifying deployment and other features presented in the next section.

2.4.1 Docker

Docker is an open-source container technology, wrapped up in a ”developer workflow” [44] that aids developers in building containers with applications and sharing those in their team. It was developed in order to provide a technology where code can be easily shared and portable through the entire development process to production, and that would run in production the same way it ran in development. Docker containers are described on the Docker web page [37] to ”wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries”. This way it is enabled to run the same way regardless of environment, and applications can be composed avoiding environment inconsistency. The Docker architecture is of client-server type and has a daemon running on the host that builds, runs and distributes containers. The user interacts with the daemon through the client (docker

binary), the primary user interface, which accepts commands from the user and communicates with the daemon via sockets or a RESTful API. The client and daemon can run on the same system, or with the daemon running on a remote host.

Docker containers are created from images, which are read-only templates that can contain applications and even operating systems, settings for the environment, among other things. Images can be built, updated or downloaded. With Docker Hub comes a public registry from which images can be downloaded. The Docker containers hold everything that is needed to run an application. Roughly, the Docker core functionality is: building images to hold applications, creating containers from the images to run applications, and sharing images through a registry.

Docker images are made up of layers, combined together by UnionFS. This file system makes it possible to layer branches (files and directories of separate file systems) transparently to form a single file system. The images are updated by adding a new layer, so that there is no need to replace and rebuild the whole image. All images are built from base images, upon which layers are added using instructions, including running commands, adding files or directories or creating environment variables. The instructions used to create new layers on an image are stored in a Dockerfile, so that Docker can read this file, execute the instructions and return a new image when requested. Every container is created from an image, which contains information about what the container consists of, what processes to run when launching the container, environment settings and other configuration data. Docker adds a read-write layer to the image when running a container, in which the user's application can be run. The container holds an OS, user-added files and metadata.
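A minimal Dockerfile might look as follows. This is an illustrative sketch only: the base image tag and the application file myapp.py are hypothetical, not taken from the thesis implementation.

```dockerfile
# Base image pulled from a registry; every later instruction adds a layer on top.
FROM ubuntu:14.04
# Each RUN instruction executes in a temporary container, which is then
# committed as a new read-only layer.
RUN apt-get update && apt-get install -y python
# Adding files or directories also creates a new layer.
COPY myapp.py /opt/myapp.py
# The process to run when a container is started from this image.
CMD ["python", "/opt/myapp.py"]
```

Running `docker build` on such a file executes the instructions top to bottom, committing one intermediate image per instruction, which is what makes the layered structure and layer reuse described in this section possible.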
To run a container, the Docker client communicates to the daemon, through the docker binary or the API, which image to run the container from and a command to run inside the container when launching. For example, when creating a container from a base image and starting a Bash shell inside it, the following takes place:

1. Docker pulls an image from the registry, layer by layer, if not already present at host.
2. Creates a container from the image.
3. Allocates a file system, adds a read-write layer to the image.
4. Creates a network interface to let the container talk to the host.
5. Attaches an available IP address.
6. Runs an application.
7. Connects and logs standard inputs, outputs and errors.

The container technology depends mostly on two kinds of kernel features, called namespaces and control groups. When running a container, Docker creates a set of namespaces for that container in order to create the isolated workspace. This means that each aspect of the container runs isolated in its own namespace. The control groups control the sharing of hardware resources to and between containers, if necessary putting constraints on resource access to make sure the containers can run successfully in isolation. Docker uses union file systems as the building blocks to create containers in a layered and light-weight way. The combination of namespaces, control groups and union file systems makes up a wrapper Docker calls a container format.

The Docker daemon can be bound to three different kinds of sockets: a (non-networked) Unix domain socket, a TCP socket, or a Systemd FD socket. By default, the daemon takes requests

from the Unix socket, which requires root permission. When accessing the daemon remotely, i.e. when the daemon and Docker host are on another system than the client, the TCP socket has to be enabled for root access on the host. This however creates un-encrypted and un-authenticated direct access to the daemon, so that an HTTPS encrypted socket or a secure web proxy must be used to secure the communication. Matters of network security are however outside the scope of this study, and no effort will be put into securing the socket as this is of no concern for the experiments conducted.

A Docker image is created by writing instructions in a Dockerfile, each instruction creating and adding a layer to the local image cache. When creating an image, an already existing base image is often used as foundation. After this, instructions such as run commands can be written to extend or modify the image. For every instruction, a container will be run from the existing image; within the running container the instruction is executed, and finally the container is stopped, committed (creating a new image with a new ID) and removed. For a following instruction, a new container is started from the last saved image committed by the previous container, and the process is repeated. In this way, all images are made up of layers, but every layer is an image in itself, or rather a collection of images. Therefore a container can be created from any layer with an ID, and layers can also be re-used for building different images. When adding things to an existing image a new image is created, but instead of rebuilding the entire new image it keeps a reference to a single instance of the starting image saved in the cache. By this, image layers do not need to exist more than once in the local file system, and do not need to be pulled more than once [45]. This also leads to a much faster image build when a Dockerfile is re-used.
Image trees may exist in the cache, forming an image relations history with parents and children. The Dockerfile is processed by scanning all children of the parent image to see if one matching the current instruction already exists in cache. If it does, Docker jumps to the next instruction; if it does not, a new image is created. Because of this, if an instruction in the Dockerfile is changed, all of the following instructions in the file are invalidated in the cache, and the children of the altered instruction/layer/image are invalidated. From this it is possible to conclude that in order to re-use as much as possible from an already-in-cache image and have a fast image build, changes in layers should be kept to the latest instructions. When creating a container from an image, all the image's layers are merged, creating a significantly smaller file.

There are two approaches to update an image, according to Docker's documentation [37]:

1. Update a container created from an image and commit the results to a new image. — Run a container on the image that should be updated. In the container, make desired changes. Exit the container. Use docker commit to copy the container and commit it to a new image that is created and written to a new user. Run a new container on the new image.

2. Use a Dockerfile to specify instructions to create a new image. — Use docker build to build images: create a Dockerfile. State a source for the base image. If wanted, add RUN instructions to the Dockerfile to execute commands in the image for e.g. updating or installing packages, thus adding own new layers to the base image. Use the docker build command and assign the image a user, specify location etc. Run a container from the new image.

When building a similar image (same base image, but new added build instructions in the Dockerfile) Docker will reuse the layers already built for this image and only create the new

build instruction layer from scratch. Reusing layers like this makes image building fast. The image's history will be saved containing all its building steps so that rollback is possible [37].
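The cache-invalidation behavior described above can be sketched as a chain of layer identities, where each layer's identity depends on its parent's identity plus its own instruction. This is a simplified conceptual model, not Docker's actual hashing scheme:

```python
import hashlib

def layer_ids(instructions):
    """Return one ID per Dockerfile instruction; each ID chains the parent
    ID with the instruction text, mimicking Docker's build cache keys."""
    ids, parent = [], "scratch"
    for ins in instructions:
        parent = hashlib.sha256((parent + ins).encode()).hexdigest()[:12]
        ids.append(parent)
    return ids

v1 = layer_ids(["FROM ubuntu", "RUN apt-get install -y python", "COPY app.py /app.py"])
# Changing only the last instruction leaves all earlier layers as cache hits...
v2 = layer_ids(["FROM ubuntu", "RUN apt-get install -y python", "COPY app2.py /app.py"])
assert v1[:2] == v2[:2] and v1[2] != v2[2]
# ...but changing an early instruction invalidates every following layer.
v3 = layer_ids(["FROM debian", "RUN apt-get install -y python", "COPY app.py /app.py"])
assert all(a != b for a, b in zip(v1, v3))
```

The two assertions capture the practical advice drawn in the text: keep the instructions most likely to change at the end of the Dockerfile so that the longest possible prefix of layers can be reused from the cache.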

2.4.2 Related Work

Raho, Spyridakis, Paolino and Raho [46] compare the Docker solution with hardware-supported hypervisors (KVM and Xen) on an ARMv7 development board (Arndale). KVM and Xen are full virtualization techniques, hypervisors, that have hardware support in the ARMv7 processor. They note that hypervisors provide high isolation given by hardware extensions. They use Hackbench to measure the time it takes to send data between schedulable entities, IOzone to measure file I/O performance for read/write operations and Netperf to measure networking activity (unidirectional throughput and end-to-end latency). They also use Byte-Unixbench, which measures a number of aspects: CPU performance (instructions per unit time in integer calculations), speed of floating-point operations, execl calls per unit time, pipe throughput and pipe-based context switching, memory bandwidth performance (when creating processes), shell scripts (started and reaped per unit time) and system call overhead. The results from Hackbench showed a small or negligible overhead for all techniques; IOzone showed lower overhead for Docker when it came to the write operation, while all other operations gave better scores for Xen and KVM due to their caching mechanisms. Netperf showed results close to non-virtualized operation with packets of 64 bytes or more; Xen outperformed when it came to smaller packets. Byte-Unixbench gave similar results for all experiments. So, in conclusion, the performance overhead for all three virtualization techniques was very small.

Felter, Ferreira, Rajamony and Rubio [47] also compare Docker with KVM. They use micro-benchmarks to measure CPU, memory, network, and storage overhead. Their setup consists of an IBM System x3650 M4 server with two 2.4-3.0 GHz Intel Sandy Bridge-EP Xeon E5-2665 (64-bit) processors. They measure two real server applications as well: Redis and MySQL. The results conclude that containers are equal or better in almost all aspects. The overhead for both techniques is almost none considering CPU and memory usage; they do, however, impact I/O and OS interaction. They claim this to be because of extra cycles for each I/O operation, meaning that small I/Os suffer much more than large ones. They specifically measure throughput for PXZ data compression, where Docker performs close to a non-virtualized environment; KVM, however, is 22% slower. They also measure performance when solving a dense system of linear equations with Linpack; since this has very little to do with the OS, Docker performs close to the non-virtualized environment. KVM, however, needs to be tuned in order for it to give performance on par with a non-virtualized environment. They also measure using the STREAM benchmark program, which measures sustainable memory bandwidth. For STREAM, the non-virtualized and the virtualized environments perform very close to each other. The same result can be seen when testing random memory access with RandomAccess. They also measure network bandwidth utilizing nuttcp; here there is a difference between transmitting and receiving, where KVM performs close to native when transmitting, but Docker demands some more CPU cycles. When receiving, however, KVM is slower whereas Docker demands more or less the same amount of CPU cycles. The network latency is also measured, here using netperf. The results show that Docker takes almost twice the time of the native system, and KVM slightly less than Docker.
Block I/O is measured using fio, where the non-virtualized and the virtualized environments perform similarly when measuring sequential read and write. When measuring random read and write there is, however, an overhead for KVM.

Xavier et al. [48] tilt their research towards HPC (High Performance Computing), with an experiment setup of four identical Dell PowerEdge R610 servers with two 2.27 GHz Intel Xeon E5520 processors and one NetXtreme II BCM5709 Gigabit adapter each. Because they are looking into HPC, they use the NAS Parallel Benchmarks (NPB); they also use the Isolation Benchmark Suite as well as Linpack, STREAM, IOzone, and NetPIPE (network performance). The container technologies tested are Linux-VServer, OpenVZ and LXC, against Xen. The conclusion is that all container-based systems have near-native performance of CPU, memory, disk and network. The resource management implementations show poor isolation and security, however.

Joy [49] makes a performance comparison between Linux containers and virtual machines, looking specifically at performance and scalability. The experiments are conducted on an AWS EC2 cloud and two physical servers. The comparison made was between EC2 virtual machines and Docker on physical servers. The application performance comparison is done with Jmeter, and the scalability comparison is made by upscaling the virtual machine until it reaches maximum CPU load and by utilizing the Kubernetes clustering tool for Docker. Joy concludes that containers outperform virtual machines in terms of performance and scalability.

This thesis will investigate operating-system-level virtualization from an embedded systems point of view, and look more into metrics measurable from the outside rather than benchmark testing like [46, 47]. Together with the related work presented in this section, this gives engineers a better basis to determine the suitability of using Docker for updating when implementing a solution similar to the one described later in this thesis.

2.4.3 Docker Compatible Hardware

There are a number of operating systems that Docker containers can run on [37] (”Install Docker Engine”): Linux-kernel based, Windows and OS X. Linux-based operating systems are widely used for embedded systems and will therefore be used in this thesis. They are also free and open-source, making them easier to use. For using a Linux kernel based operating system, the Docker Engine has to have a Linux kernel (3.10 or later) with an adequate cgroupfs hierarchy and some other utilities and programs installed (Git, iptables, procps, XZ Utils). So, the size of the system can be anywhere in between a large mainframe and a small embedded system. Containers can also be slimmed [50] for more efficient memory usage. The same logic can be applied to other types of containers, since the container creating mechanisms utilize programs built into the kernel, such as cgroups and namespaces. The requirements for the operating system to run containers can be very small and are mostly dependent on what is to run within the container. The hardware requirements for running a Linux-kernel based operating system differ between the operating systems; the kernel itself can also be slimmed down [51], so there are no clear requirements for the memory capabilities.
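The minimum kernel requirement (3.10 or later) can be checked programmatically before attempting a Docker installation. The sketch below is illustrative; the helper name and the sample version strings are our own, not taken from the Docker documentation:

```python
import platform
import re

MIN_KERNEL = (3, 10)  # Docker Engine's minimum Linux kernel version

def kernel_at_least(release, minimum=MIN_KERNEL):
    """Compare a kernel release string such as '4.4.0-210-generic'
    against a (major, minor) minimum version."""
    match = re.match(r"(\d+)\.(\d+)", release)
    if not match:
        return False
    return (int(match.group(1)), int(match.group(2))) >= minimum

# Check the kernel of the running host:
print(kernel_at_least(platform.release()))
# A typical older embedded kernel would fail the check:
print(kernel_at_least("3.4.113"))   # prints False
```

Note that tuple comparison handles the version ordering correctly: 3.4 is older than 3.10 even though 4 > 1 as a string prefix.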

So, answering the first research question, RQ 1 (see 1.2 Objectives), nothing can generally be said about the hardware capabilities needed to run a container. Continuing with RQ 1.1 and 1.2, the CPU and memory capabilities needed are in their entirety dependent on what kind of applications are to be run within the containers, as well as the settings made when building the Linux kernel and installing the container mechanism (if it is not included in the kernel). It has already been shown in section 2.4.2 Related Work that Docker and other container mechanisms have been known to show very little overhead. Docker had a challenge in 2015, where the premise was to build as many Docker containers as possible on a

[52]; this showed that a) Docker does not really know how small their solution can be made and b) the question of hardware capabilities is relative to what is to be accomplished within. Some operating systems have instructions on the Docker website, and one can therefore conclude that Docker must be able to run without major fault on them [37]. The listed Linux kernel based operating systems are Arch Linux, CentOS, CRUX Linux, Debian, Fedora, FrugalWare, Gentoo, Oracle Linux, Red Hat Enterprise Linux, openSUSE, SUSE Linux Enterprise and Ubuntu. Of these, the ones that are specifically designed for embedded systems are Arch Linux, Debian and Gentoo.

The processor architectures the Linux kernel can run on can be found in the release notes of the latest release (4) [25]. The release notes list a number of processor architectures it can run on, also implying it can be run on more general-purpose 32- or 64-bit processor architectures if they have a paged memory management unit (PMMU) and a proper setup of the GNU C compiler (gcc). The architectures for the operating systems listed on the Docker website [37] can be seen in Table 1. Of the architectures listed, some are used (and designed) for high performance embedded systems like workstations, gateways and/or servers, like MIPS, z/Architecture and IBM Power architectures. This compilation will concentrate on hardware that suits the implementation presented in 1.3 Scope and further described in 2.2 Target Market and Parameters of Interest: ARM and x86 architectures. The ARM architectures, 32-bit and 64-bit, have a RISC instruction set design. The x86 architectures usually refer to the 32-bit versions (IA-32), but some have 64-bit enabled (called x86-64). The x86 architecture leans more towards a CISC instruction set design.

Arch Linux: ARM architectures, IA-32, x86-64
CentOS: x86-64
CRUX Linux: x86-64
Debian: ARM architectures, i686, IA-32, IA-64, IBM Power architectures, x86-64, MIPS architectures, z/Architecture
Fedora: ARM architectures, IBM Power architectures, x86-64, MIPS architectures, z/Architecture
FrugalWare: i686, x86-64
Gentoo: ARM architectures, DEC Alpha, IA-32, IA-64, IBM Power architectures, x86-64, PA-RISC, SPARC 64, Motorola 68000
Oracle Linux: IA-32, x86-64
Red Hat Enterprise Linux: IA-32, IBM Power architectures, x86-64, S/390, z/Architecture
openSUSE (SUSE): IA-32, x86-64
Ubuntu: ARM architectures, IA-32, IBM Power architectures, x86-64

Table 1: The operating systems listed on the Docker website [37] and the processor architectures they run on.

Hardware for embedded systems can be designed down to the smallest detail by the engineer. In this thesis, the market under investigation is prototype-making with low start-up costs and short time to market. Ergo, the hardware must be single-board computers (SBCs) [53] or similar that are viable options for the IoT start-up market that is the regarded implementation area (see 2.2 Target Market and Parameters of Interest). Linux can be run on FPGAs [51] programmed to look like CPUs, but since this is largely a work-around to use Linux, only hardware based on CPUs will be regarded in this compilation.

There is a great number of vendors of SBCs (and similar) with ARM or x86 architectures that can run one of the listed operating systems. However, the board often needs a patched or specifically built Linux kernel. One of the most popular architectures for embedded systems of the implementation type this thesis looks at is ARM. Ubuntu has a server installation of their operating system for 64-bit ARMv8 [54]; embedded systems of this architecture often use 32-bit, however. Gentoo has a number of 32-bit installations, and Debian has both 32- and 64-bit installations for ARM. There is also the option of customizing a Linux kernel and a Linux-based operating system. The Yocto Project provides a build system for creating custom Linux-based systems. The benefit of Yocto is that the kernel can be built and streamlined for the application. The downside is that the system needs to be rebuilt whenever new software resources are added to the operating system, and it is therefore not suitable to develop directly on, since it can be hard to know during development everything that will be needed in advance.

There are many boards designed specifically for embedded systems, often with real-time support, that have ARM-architecture CPUs and are able to run Linux. They are, however, often very expensive and delivered with a pre-ordered real-time operating system and features that need to be customized. These factors make them unsuitable for start-ups within our scope (1.3 Scope), as the initial cost increases. Hardware options considered are SBCs that have:

• an ARM, IA-32 or x86-64 processor

• wifi or ethernet connection

• general purpose I/O (GPIO)

The possibility of an Internet connection is required for the Internet in the Internet of Things, and GPIOs are often required for the system to be connected to an actuator or sensor. The embedded systems that fulfill these requirements include the ARM-based boards Raspberry Pi, Orange Pi, Beagle Boards, ODROID and Wandboard, among many others. For x86 IA-32 there are the VersaLogic Newt (VL-EPIC-17) and Minnow Board, among others. And for x86-64, the VersaLogic Iguana and VIA EPIA P910 could fulfill all requirements. A longer list can be found in Appendix B; this list makes no claim to be entirely correct (since the information found was on the Internet and not always from the most reliable sources) nor complete. Many SBCs of the x86 architecture do not have GPIOs; they are designed more for being workstations or servers.

Some boards include a Graphics Processing Unit (GPU), a unit designed for large calculations related to graphics. FPGAs cannot only be used alone in an embedded system, but can also be used for performance enhancement together with a CPU. The FPGA is then connected to the CPU, which runs the operating system, and can be used for computing acceleration, often offering parallel execution. For this thesis, however, FPGAs and GPUs will not be used, as they would demand platform-specific configuration and are neither standard for all SBCs nor always needed for the solutions within the scope. Many boards also incorporate a micro-controller for the peripherals. Processors or co-processing units may also be digital signal processors (DSPs), processors designed for filtering or compressing digital signal inputs. DSPs will not be considered within this scope either, since they are neither standard for SBCs for the purposes stated nor overtaking general-purpose CPUs or micro-controllers, but instead seem to serve as co-processors, just like a GPU or FPGA.

The SBCs within the scope of this thesis often have CPUs performing well when it comes to

performance per unit power. ARM has always performed well in this regard, and the Intel x86 CPUs that have recently entered the market were designed to be energy efficient. The number of cores is growing for SBCs. Right now, ARM has reached octa-core and just released a new quad-core CPU. The increasing number of cores could alternatively be traded for an FPGA or GPU for parallelism. More x86-64- and 64-bit-ARM-based SBCs have also been released in later years, which might indicate that this is where the SBCs that can carry Linux will move next, apart from increasing the number of cores. Both the move to 64-bit and the increasing core counts might also just be a way for producers to capture more market share by covering more requirements, without abandoning the 32-bit or fewer-core architectures.

For the experiments, the architecture used will be ARM, since it has been shown to be the most common architecture for embedded systems. The final delimiting factor when deciding experiment hardware is community support; if there is a strong online community for a hardware platform, a start-up might be more inclined to choose it. The hardware might have been shown to work for similar implementations, and help can be found within the community. On the Linux Foundation news page, on the 10th of June 2015, there was a reader survey asking readers about their SBC preference. This article neither employs a scientific method nor contains all SBCs (or the SBCs released after this date); however, it gives an indication of which ones are more popular [55]. Raspberry Pi (B versions), BeagleBone Black, ODROID, DragonBoard, Parallella and Intel Edison were among the winners. Online communities and open GitHub projects can be found for all of these, also indicating popularity. When choosing the ARM hardware, popularity was seen as a big factor, as well as earlier proof of concept of Docker implementations, so the board used will be Raspberry Pi 2.

2.5 Remote Update

Updating remotely refers to updating without physical access to the hardware. The updating is done via some form of communication with the device, for example over the Internet. The reasons why this approach may be beneficial to an embedded system are described in 1.1 Background. The methods for updating remotely can be the same as those for updating with physical access to the hardware, the only difference being that all parts of the update must be possible to carry out without this physical access; no further distinction can be found in the literature. There are, however, further considerations with remote updates, such as mitigating the risks connected to updating that can impact remote access and leave the system inaccessible.

The two main approaches to updating the software of embedded systems are:

1. Taking the system offline and updating with a patch as an executable file (modifying or replacing the binary file), or by replacing the source code directly. The system is unable to perform its tasks during this procedure, until it is connected online again.

2. Using hot patching/hot-swapping, live patching or dynamic software updating, i.e. methods to update without shutting down or restarting the system.

Dynamic software updating (DSU) is as of today not commonly implemented in the industry, but exists as a growing field of research. One problem with this situation is that the existing DSU solutions are often ad-hoc procedures designed to manage hot updates for a very specific case of hardware, OS, scheduling etc. The focus of the thesis is to investigate an alternative way of updating using

new technology. In order to evaluate the new method, existing practices and methods for updating the software of embedded systems must be examined. The main focus will be on "hot updates" for embedded systems, to create a picture of how such procedures generally are performed and how they perform in terms of the properties we are interested in measuring. Existing update methods will be examined, also looking at what most likely would be used for updating in a normal setting for this kind of system. As the container update method relies heavily on the technology of the Docker platform, details about this software system become vital in designing the update procedure. Details about Docker technology are found in 2.4.1 Docker.

The software of an embedded product may need an update for several possible reasons: repairs when bugs need to be fixed, additional features might be added, and the software might be upgraded or in need of continuous maintenance. The basis for an updatable system is the possibility to re-program the ROM after integrating it in its final environment. When an outside device can cause the processor to stop its normal operation and instead execute re-programming code, it is called in-system programming. In in-application programming, the application itself can re-program the on-chip Flash ROM [56]. However, a CPU with a single on-chip flash cannot execute code in the flash array while simultaneously accessing and modifying the memory contents. This can be circumvented by copying the re-programming instructions into the RAM and flashing from there. If the RAM is big enough, the entire updated program can be flashed at once; otherwise it might be possible to do this subsystem by subsystem.
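The chunked re-programming described above can be sketched as follows. This is a sketch only: a real implementation would be target-specific code executing from RAM, while here a plain list stands in for the Flash ROM and the buffer size is an invented RAM budget.

```python
RAM_BUFFER_SIZE = 4  # hypothetical RAM budget for staging update data

def flash_update(flash, new_image, buffer_size=RAM_BUFFER_SIZE):
    """Write new_image into flash one RAM-sized chunk at a time.

    Illustrates in-application programming where the CPU cannot execute
    from flash while modifying it, so each chunk is staged in RAM and
    written by a RAM-resident routine.
    """
    for offset in range(0, len(new_image), buffer_size):
        chunk = new_image[offset:offset + buffer_size]   # staged in RAM
        flash[offset:offset + len(chunk)] = chunk        # written to flash
    return flash

flash = [0xFF] * 10                    # erased flash contents
flash_update(flash, list(range(10)))   # flashed in chunks of 4
```

If the update image is smaller than the flash, only the affected region is rewritten, matching the subsystem-by-subsystem approach.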

2.5.1 Dynamic Software Updating

As the container update method in this thesis aims to perform a kind of "dynamic" or "hot" update, it places itself in the same context as some of the existing DSU methods; therefore some of these will be investigated. Many dynamic updating solutions are tied to specific compilers which can patch programs on the fly. But as Docker utilizes images, and the modularity of containers preferably should be used, this is not the type of updating that is of interest in this thesis. Hayden, Saur, Smith, Hicks and Foster [57, 58] have developed Kitsune, a framework for dynamic software updating in C; this method looks at threads and updates by a C-specific regime. Smith, Hicks and Foster [59] look at C-specific DSU solutions where patches are made during the execution of a program by looking at threads. This imposes problems with multi-threaded programs, which some of these DSU systems have solved.

One method, tightly coupled with scheduling, is described by Seifzadeh et al. [60]. They define a "dummy task" which performs the update activities. The "dummy task" is placed in every hyper-period, doing nothing unless called; when called, it adds a new version of a task, removes the old version and checks whether the new set of tasks is schedulable. The new task cannot, however, be executed until the next hyper-period, since the new task may have a different period, creating a new hyper-period. For the "dummy task" to appear in every hyper-period, its periodicity must be set equal to the hyper-period, which also gives it the greatest possible periodicity of the task set. By making the dummy task lowest priority, e.g. in this case using rate-monotonic scheduling, the task is ensured not to interfere with the normal tasks, as it executes as late as possible, in idle time. Another job for the dummy task is to find and update all the points in the program that call the task, here called "dependent points", redirecting them to the new version. Seifzadeh et al.
describe the tasks of the dummy task as:

1. Setting the system state to updating

2. Calculating the hyper-period of the new set of tasks

3. Adding a dummy task with this period

4. Checking if the task set is schedulable, otherwise rejecting the update request

5. Loading the new task version into memory and storing its memory address

6. Collecting the dependent points of the old version and updating them so that they direct to the new memory address

The authors find that the time-critical step is the I/O operation of loading the new version of the updated task into memory; it constitutes practically the entire execution time. For real-time systems, this DSU might be a valid option. The update regime when using Docker for the implementations considered in this thesis, however, will be tied to the modularity of the Docker solution. The isolation of the containers can be used to run containers simultaneously. In a multi-version dynamic updating regime described by Chen, Qiang, Jin, Zou and Wang [61], the process is forked, run simultaneously, updated and synchronized, and then swapped in to take over for the old process. This can be tied to Docker, where one container in a network of containers needs to be updated: the new container runs simultaneously with the old one before taking over the execution. A similar solution is described by Buisson [62]. Here, the difference between active and passive program monitoring is presented, where the program either is told when to update or asks if it is time to be updated. The regime is similar to that of Chen et al., where creation, destruction, linking and unlinking of programs ends with a "hotswap", over which the program has mandate. For version control, some propose an event-driven update regime [63], where authentication is used to maintain version control over the updates.
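Steps 2-4 of the dummy task can be sketched as below. Note the hedges: Seifzadeh et al. do not prescribe a particular schedulability test, so the sufficient Liu and Layland utilization bound for rate-monotonic scheduling is used here as one possible check, and the task set is invented for illustration.

```python
from functools import reduce
from math import gcd

def hyper_period(periods):
    """Hyper-period = least common multiple of all task periods (step 2)."""
    return reduce(lambda a, b: a * b // gcd(a, b), periods)

def rm_schedulable(tasks):
    """Sufficient rate-monotonic test: U <= n * (2**(1/n) - 1) (step 4).

    tasks is a list of (execution_time, period) pairs.
    """
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    return utilization <= n * (2 ** (1 / n) - 1)

new_tasks = [(1, 4), (2, 10)]                 # hypothetical updated task set
hp = hyper_period([t for _, t in new_tasks])  # step 2: hyper-period is 20
dummy = (0, hp)                               # step 3: dummy task, period = hp
accept = rm_schedulable(new_tasks + [dummy])  # step 4: accept or reject update
```

Giving the dummy task the hyper-period as its period automatically makes it lowest priority under rate-monotonic scheduling, as the text describes.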
The update regime is tightly coupled to the implementation; the common points all DSU regimes come back to are:

• Update points - places in the execution of a program where the program checks for updates.

• State transfer - the state of the process is saved and a new state is constructed in the updated program.

• Hotswap - the point at which the old program (microservice) stops its communication with other programs (microservices) and peripherals, and the new (updated) program (microservice) takes over the communication.
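The three common points can be illustrated with a toy program; the counter and the two task versions are invented for illustration only.

```python
def old_version(counter):
    return counter + 1        # old behaviour: increment by 1

def new_version(counter):
    return counter + 2        # updated behaviour: increment by 2

def run(steps, update_at):
    step_fn = old_version
    state = {"counter": 0}
    for i in range(steps):
        if i == update_at:            # update point: check for an update
            saved = dict(state)       # state transfer: save the old state
            step_fn = new_version     # hotswap: the new version takes over
            state = saved             # ...and resumes from the saved state
        state["counter"] = step_fn(state["counter"])
    return state["counter"]

result = run(steps=5, update_at=3)    # 3 steps of +1, then 2 steps of +2
```

Because the state is carried across the hotswap, the new version continues from where the old version left off instead of restarting from zero.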

2.5.2 Microservices Principle

The benefits of using microservices can be described as "rather than architecting monolithic applications, one can achieve a plethora of benefits by creating many independent services that work together in concert", as stated by Amaral et al. [64]. Microservices can provide the ability to scale and update services in isolation, to write them in different languages and to use different middleware and data tiers. Microservices can, however, also introduce computational overhead for an application running in different processes, as it depends on network communication instead of just function calls. Amaral et al. [64] find that two approaches exist when utilizing a container ensemble for services, named "master-slave" and "nested containers". The master-slave solution

has one container, the master, coordinating the other containers running on the host, the slaves. The master tracks the slaves and helps their communication, but is a regular container running in parallel with them. For nested containers, child containers are created within a privileged parent container, limiting them to the parent's boundaries and guaranteeing they all share the same resources and fate. They can add infrastructure-management flexibility and easier workload deployment. They may, however, introduce overhead, as there are two layers of Docker daemons, and as initializing Docker in the parent, including loading an image and creating a child, takes a considerable amount of time. The master-slave approach is said to represent the regular container approach, while the nested containers are called microservices.

Krylovskiy, Jahn and Patti [65] describe a microservice architecture in a Smart City Internet of Things platform, or rather the problems and solutions in creating such an architecture for this purpose. They describe the needs of such a platform as scalability, evolvability and maintainability, where microservices seem to excel. Microservices are described as having been used in building distributed web applications. The authors recognize that there is no formal definition of a microservice architecture, but describe it as "an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms. These services are small, highly decoupled and focus on doing a small task." They list some key characteristics of the microservice architecture and the benefits these characteristics often give:

• Technology heterogeneity - since the architecture does not implement a master over other services, the same model of the world, or the same data storage, different technologies can be used by different components in the system.
• Resilience and ease of deployment - since all microservices have clear boundaries, they are isolated and can be updated and deployed independently.

• Scaling with microservices - the microservice architecture can be scaled.

• Organizational alignment - the organization can easily be built around business capabilities.

• Composability - system capabilities can be created by adding or re-using existing microservices.

Stubbs, Moreira and Dooley [66] have developed a tool, Serfnode, created for service discovery in a distributed system that utilizes microservices. Microservice architecture is described as an "architecture where systems are composed from many smaller, loosely coupled services". The authors recognize that there is no clear definition of microservices; they use the definition that "a microservice is an separate process that uses messages to communicate with other parts of system" [sic]. The contrast to microservices is a monolithic architecture, where (almost) everything is one process. This has problems with scaling and modularity, where microservices have the upper hand. For the creation of microservices, the authors use Docker containers, as they recognize the facilitation Docker provides when creating, deploying and maintaining applications or services within containers. A tool for service discovery was created in order to facilitate communication between service consumers and service providers in the microservice architecture. It adds three functionalities that the authors have found to be needed in order to have a functioning distributed system of microservices in containers:

• Cluster membership - the containers belong to a cluster of containers so that all members are reachable by one member at defined addresses

• Monitoring - processes are monitored by a supervisor

• Event handling - different events that need to be handled, defined by the user, cluster or supervisor

There is no master node in this architecture. This paper illustrates the further needs that arise in practical implementation: limitations in hardware and communication, and the common requirement in cyber-physical systems that tasks be conducted in a specific order. The tool, Serfnode, was created for Agave, which provides tools and APIs to create gateways. The authors also recognize that Docker has additional tools that may fulfill these functionalities (Docker Swarm and Docker Compose). They conclude that "distributed systems of microservices can overcome shortcomings of traditional monolithic architectures by allowing independence of development, deployment, updating and scaling of components [...]".

3 Implementation

In this section the motivation, reasoning and implementation of the updating regime are described.

3.1 Update Regimes with Containers

For this thesis, the updating will be made on the application level, motivated by the fact that if firmware were to be updated, there would be hardware-related changes demanding time offline, conflicting with the uptime that is the major concern for this thesis. The update process relies on switching between two containers containing the old and the new application. To exchange two containers, the basic sequence can be summarized as follows:

1. An application runs inside a container generated from a Docker image.

2. A new image is downloaded, containing the updated application.

3. A container is run from the new image, alongside the first container.

4. The new container's application is verified to see that it runs correctly.

5. Hardware access and communication with other units are reassigned to the new container.

6. The first container is deleted.

This means that before any exchange takes place, the new image is pulled down to the host and the new container can start with no hurry. The critical moment is when these containers exchange control of the interfaces and possibly also the current state. To manage this exchange, two different container-service architectures are proposed. They are drawn from the principles of container ensembles and microservice clusters described in 2.5.2 Microservices Principle as the two most suitable solutions for the embedded system implementation.

The first consists of having a "mother container" running alongside the containers with the old application and the new application. It runs in parallel but has control over all the other containers. To make the software easier and faster to update, all GPIO and interface code is placed in a separate container, taking variables from an application container as input. If no new I/Os or ports are used, this container needs no update. If new ones should be added, these can be placed in a new GPIO-purpose container parallel to the existing one.
To update values such as a PWM duty cycle, only the application container which contains these values needs to be updated. The mother container takes the update command from the user and makes sure that the old application container, referred to as "App 1" in Figure 3, sends an (eventual) state to the new application container, "App 2", relinquishes its control of the GPIO-interface container and shuts down. The mother container also redirects the access to the GPIO-interface container to App 2 and starts its inner application.

The second alternative is to let App 1 and App 2 perform the process themselves, as can be seen in Figure 4. This approach is the same as the former, except for the mother container. The user downloads App 2 and starts the container so that it is up and running. App 2 makes contact and introduces itself to App 1, which then starts communicating with App 2. App 1 checks if App 2 is ready; if affirmative, App 1 sends its state to App 2 and relinquishes control of the GPIOs.
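The basic six-step exchange sequence can be sketched as a host-side simulation. The dictionaries stand in for Docker state, and the image and container names are invented; in a real setup steps 2, 3 and 6 would map to `docker pull`, `docker run` and `docker rm`.

```python
# Simulated host state: one old application container owns the GPIOs.
host = {
    "images": {"app:1.0"},
    "containers": {"old_app": "app:1.0"},   # step 1: old app is running
    "gpio_owner": "old_app",
}

def exchange(host, new_image, new_name, old_name):
    host["images"].add(new_image)              # step 2: download new image
    host["containers"][new_name] = new_image   # step 3: run alongside the old
    verified = new_name in host["containers"]  # step 4: verify the new app
    if verified:
        host["gpio_owner"] = new_name          # step 5: reassign hardware access
        del host["containers"][old_name]       # step 6: delete the old container
    return verified

exchange(host, "app:2.0", "new_app", "old_app")
```

Note that both containers coexist between steps 3 and 5, which is exactly the window in which control of the interfaces must be arbitrated.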

Figure 3: Update regime 1.

App 2 receives the state, takes control of the GPIOs and starts executing its program from the eventual state.

Figure 4: Update regime 2.

The reason to use these kinds of strategies is to create a semaphore-like control of the GPIOs and interfaces, ensuring that the old and new software will not collide in their access. It is also to enable a fast and seamless switch of applications, following the notion of uptime in RQ 2 (see 1.2 Objectives).
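The semaphore-like control can be pictured as an ownership token held by the GPIO-interface container; the class and container names below are invented for illustration.

```python
class GpioInterface:
    """GPIO-interface container granting write access only to the token holder."""

    def __init__(self, owner):
        self.owner = owner
        self.pins = {}

    def write(self, container, pin, value):
        if container != self.owner:   # non-owners are rejected: no collision
            return False
        self.pins[pin] = value
        return True

    def hand_over(self, new_owner):
        self.owner = new_owner        # the switch between App 1 and App 2

gpio = GpioInterface(owner="app1")
gpio.write("app1", 18, 1)             # App 1 drives pin 18
gpio.hand_over("app2")                # seamless switch of ownership
rejected = gpio.write("app1", 18, 0)  # App 1 can no longer interfere
```

Only one container can ever hold the token, so the old and new applications cannot write to the same pin at the same time, regardless of when the old one shuts down.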

3.2 Design Guidelines

The update regime will take advantage of the container characteristics as well as the microservice-specific characteristics connected to using Docker, see 2.5 Remote Update. The major guidelines for the update regime are:

• Scalability - the architecture of microservices can be rescaled by updating a container

• Speed - the updating process is done while the system is online, utilizing dynamic methods to sustain functionality

• Self-propelled - the user need only start the new container; the update regime is then executed automatically

• General - the updating regime can be used for systems of different complexity (see also Table 2)

• Self-tested - the updating container tests itself before taking over execution

Combination         1  2  3  4  5  6  7  8
Multiple States     0  0  0  0  1  1  1  1
Memory Usage        0  1  0  1  0  0  1  1
Network Flexibility 0  0  1  1  0  1  0  1

Table 2: Complexity combinations for updating.

Safety and security aspects will not be designed for; however, room will be left to implement these within the regime. The complexity that the regime can be asked to handle can be seen in Table 2, in which three things are taken into account. Firstly, whether the program has multiple states; this is typical for a program that behaves differently depending on input. An example of a multiple-state program can be seen in Figure 5, where an automaton is shown. Here the program is in State 1 if variable X is zero or negative, and in State 2 if X is positive. Inside these states, different versions of a task can be run, or constants used in one task can differ (for example different PID controllers for different velocities). Microservices are in many ways defined by only having one task, but since there is no clear definition of microservices, and splitting a program into different microservices depending on state can be cumbersome or impractical, the authors of this thesis see that one microservice can have multiple states.

Figure 5: An automaton showing a typical multiple-state microservice.
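The two-state automaton of Figure 5 can be sketched as a small state machine. The `start_state` argument illustrates the multiple entry points demanded in the guidelines below: an updated container may need to resume directly in State 2.

```python
def step(state, x):
    """Transitions of the automaton: State 1 while X <= 0, State 2 while X > 0."""
    if state == 1 and x > 0:
        return 2          # X turned positive: State 1 -> State 2
    if state == 2 and x <= 0:
        return 1          # X zero or negative: State 2 -> State 1
    return state

def run(inputs, start_state=1):
    """start_state provides the multiple entry points required for updating."""
    state = start_state
    for x in inputs:
        state = step(state, x)
    return state

final = run([0, 3, -1])              # ends back in State 1
resumed = run([5], start_state=2)    # an updated container entering in State 2
```

Inside each state, different task versions or constants (such as different PID parameters) would be selected; the sketch only tracks which state is active.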

Complexity can also be added by whether the program uses some kind of dynamic memory or not. If there is a counter or a variable being calculated, the value of that variable should be transferred so that the outputs depending on this calculation are correct. Finally, complexity

can be added by the microservice being updated with another microservice which has more or fewer nodes within the microservice network to communicate with. This demands flexibility of the regime. In conclusion, these guidelines give the explicit demand that the microservices (containers) must be built with updating in mind, being able to communicate their state and memory contents. A microservice must also have multiple entry points if it is divided into different states, meaning that in Figure 5 the entry point should not only be in State 1 but also in State 2 if needed.

3.3 Overview of the Chosen Updating Regime

Taking into account flexibility and scalability, the choice was made to use update regime 2 (see Figure 4). In this regime, the containers are self-controlled and do not rely on a third container for orchestration. This regime is more flexible, which can be seen when looking at what happens when two different networks of containers (microservices) are to be connected, or one network is to be split up into several; in that case a mother container would have to be created or removed. This contradicts the idea of loosely coupled nodes within the microservice network. An overview of the update regime can be seen in Figure 6. To the left in the figure is Old Application, which is the application container to be updated, and to the right is New Application, which is the updated container. This application communicates with two other containers, which can be seen at the top of the figure.

Figure 6: Overview of the updating regime.

The regime consists of eleven steps:

Step 0: The old application container is up and running, communicating with two other containers (Com), and has a built-in preparation for updating (Update logic).

Step 1: The new application container is started.

Step 2: The new application container sends a message to the old application container asking for the data needed to test-run itself. The IP-address of the application container to be updated is either given as an input when starting the container or searched for in a given network.

Step 3: The old application container sends data describing itself to the new application container. This information may contain: (a) which state the application is in, (b) an address to the memory places used, (c) the IP-addresses of the containers it is connected to.

Step 4: The old application container sends the IP-address of the new container to the containers it is connected to. They in turn connect to the new application container.

Step 5: The new application container starts listening for a connection from the network of other containers.

Step 6: When connected, the new application container starts its application; it recreates the same state and reads in variables from memory. It proceeds with normal execution; however, in the messages it sends to the other containers or to the shared memory, it communicates that it does not have the right to write into memory.

Step 7: Built-in test code checks the container status to verify that it executes in the desired manner; if not, it stops the updating regime.

Step 8: If the test of the new application container passes as OK, the message to the other containers is altered, communicating that it has the right to write. They in turn now use this new data instead of the old, and send a command to the old application container, telling it to shut itself off.

Step 9: The old application container receives the command to shut itself off from the network of containers, and proceeds to do so.

Step 10: Cleanup of the old container is either executed manually or a service is used.
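The core of the handshake, steps 2 through 9, can be simulated as below. The field names and values are illustrative only; the regime leaves the exact message format open.

```python
# Simulated state of the old application container before the update.
old_app = {"state": 2, "memory": {"counter": 41},
           "peers": ["172.18.0.3", "172.18.0.4"], "running": True}

def describe(old):
    """Step 3: the old container describes itself to the new one."""
    return {"state": old["state"], "memory": dict(old["memory"]),
            "peers": list(old["peers"])}

def take_over(new, old):
    new.update(describe(old))          # steps 2-3 and 6: recreate the state
    new["can_write"] = False           # step 6: no right to write yet
    if new["state"] in (1, 2):         # step 7: built-in self-test
        new["can_write"] = True        # step 8: write access granted
        old["running"] = False         # step 9: old container shuts itself off
    return new

new_app = take_over({}, old_app)
```

If the self-test in step 7 fails, `can_write` stays false and the old container keeps running, which is exactly the fallback the regime relies on for seamlessness.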

3.4 Detailed Model

A detailed model of the updating regime was found to be needed because of its complexity, but also to be able to generalize over the regime. The model was made in UML (Unified Modeling Language), using the tool Enterprise Architect. The model in its entirety can be found in Appendix C; the most distinctive parts will be presented in this section. Before the updating regime starts, the platform-specific container controlling the peripherals, the GPIO container, is up and running, as well as the container currently running the application, the Old Container (see Figure 7 for the top level of the model). The container that is to update the current container, the New Container, has not yet been started.

Figure 7: Model of the update regime, top level.

Both application containers are structured the same, including one update client, one server for communication with the network and one server specifically for updating; see Figure 8 for a model of the New container. When the container that is to update the current container, the New Container, is started, it immediately starts its update client; see Figure 9 for a model of the update client. It connects to the Old container and gets the container information needed to start its application. The servers are started after this, and the GPIO container (and other containers, if they exist) connect to the server. The information that the New container has been started was passed from the Old container to the GPIO container, which immediately tries to connect to it. The application is then started, see Figure 10, and is internally tested. This function could also be monitored by the engineer, waiting for a user input to give the go-ahead for taking over the functionality. When this test function has given the go-ahead, the message to the GPIO container, which so far has read the message but not used the data it contains, is altered, handing over the functionality to the New container. The GPIO container then sends a "kill command", a message to let the Old container know that it is not being used anymore, letting it shut itself down. After the test function has given the go-ahead to the New container, its own update server starts to listen. This ensures that this container can be updated as well, since the server is not started until the update regime is finished.
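The update server and update client from the model can be sketched with plain TCP sockets. The port, the `describe` request and the JSON reply are invented for illustration; the real protocol of the implementation is detailed in Appendix C.

```python
import json
import socket
import threading

def update_server(sock, info):
    """Old container's update server: answers one describe request."""
    conn, _ = sock.accept()
    with conn:
        if conn.recv(1024) == b"describe":
            conn.sendall(json.dumps(info).encode())

def update_client(host, port):
    """New container's update client: asks the old container for its info."""
    with socket.create_connection((host, port)) as s:
        s.sendall(b"describe")
        return json.loads(s.recv(4096).decode())

# Demonstration on localhost; in the real regime the client would target
# the old container's address on the user-defined bridge network.
server = socket.socket()
server.bind(("127.0.0.1", 0))        # OS-assigned free port
server.listen(1)
threading.Thread(target=update_server,
                 args=(server, {"state": 1, "peers": []}),
                 daemon=True).start()
info = update_client("127.0.0.1", server.getsockname()[1])
server.close()
```

The reply carries the container information (state, peers) that the new container needs before it can start its application and, eventually, its own update server.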

Figure 8: Model of the update regime, New container.

Figure 9: Model of the update regime, New container's update client.

Figure 10: Model of the update regime, New container's application.

3.5 Implementation Setup

In this section, the hardware and the container solution on which the updating regime is built are described.

3.5.1 Platform Specifics

For the experiments, the hardware is utilized in a way deemed typical for the implementations regarded in this thesis, see 1.3 Scope. The hardware used is a Raspberry Pi 2, model B. It has a 900 MHz quad-core ARM Cortex-A7 CPU and 1 GB RAM. It also has peripherals in the form of 4 USB ports, 40 GPIO pins, an HDMI port, an Ethernet port and a micro SD card slot, fitted with an 8 GB Ultra High Speed (UHS-I) SD card. The OS and Docker implementation for the ARM platform was taken from Hypriot [67], with their own image containing a Debian Wheezy Black-Pearl OS and Docker built specifically for ARM.

3.5.2 Docker

The Docker installation used is one specifically implemented for the Raspberry Pi 2, created by Hypriot on top of the Debian OS [67]. Two Dockerfiles were created, one for the application containers and one for the GPIO container. The application containers have a Debian Wheezy base image (see Appendix D.1). When building the container, Docker copies the compiled code into the container, and the default command is to open bash (a shell). After the image has been built, the container is started by the command:

docker run --net=isolated_nw --ip=172.18.0.2 -itd --name=old_container application_image

for the old application container. Assigning a name of choice to the containers simplifies managing them. The command assigns the container to a local user-defined bridge network, isolated_nw, with a fixed IP-address. The sub-network was created with

docker network create -d bridge --subnet 172.18.0.0/16 isolated_nw

for easy and isolated communication between the containers. By creating a user-defined bridge network it is possible to freely dictate what IP-addresses the containers should have, simplifying the implementation of the communication. The network is local to the host and therefore isolates the containers from external networks. Several proxy settings had to be made, since the system, the Docker daemon and the built container all need them. For the container, the setting is made in the Dockerfile. The GPIO Dockerfile was taken from a GitHub repository and altered to add the code used, as well as an editor (vim) for convenience. It adds a number of Python utilities that can be used with WiringPi, but these are not used in this implementation. Running this container differs from the application containers in that it has to be run privileged, meaning it has full rights on the hardware, which is needed to use the peripherals. It also sets two device flags, enabling the container to write directly to the serial port (/dev/ttyAMA0) as well as to system and kernel memory and system ports (/dev/mem), by running:

docker run --device /dev/mem:/dev/mem --device /dev/ttyAMA0:/dev/ttyAMA0 --privileged --name=wiringpi -ti gpio /bin/bash

The network configuration was also added in this command.

3.5.3 Development Environment

Docker is officially supported only on 64-bit architectures, both for the daemon and for the repository. Since the experiment platform has a 32-bit architecture, a 32-bit environment is therefore needed in order to build the Docker daemon, images and so on. The development can be done directly on the hardware or in a VM on a PC. Since the user experience is more or less the same in both cases, development is done directly on the hardware.

3.6 Container Implementation

The Docker solution is applied to an embedded system which performs specific tasks in accordance with what a system within the scope of this thesis may be required to perform. Within the scope (1.3 Scope), the functions in such an application may include

• general purpose I/O (GPIO)
• timers
• PWM
• I2C communication
• SMPP communication (Short Message Peer-to-Peer)
• hardware interrupts and timer interrupts
• analog I/O (ADC)
• Internet connection

and perhaps more. The requirements on the hardware used in this thesis were that it must have a Wi-Fi or Ethernet connection and a general purpose I/O (GPIO) port (see 2.4.3 Docker Compatible Hardware). These requirements relate to the Internet of Things, where GPIO is deemed the smallest requirement that can be put on an embedded system, as it must in some way be connected to an actuator or sensor. Actuators, especially motors, are often controlled by a PWM signal; therefore such a signal will be generated. This signal will be measured to see the end effect an update can have on connected actuators. It will have a frequency of 50 Hz, which is a common lowest frequency for controlling a servo. A servo frequency was chosen because a servo may change its rotation drastically if the PWM signal is not maintained. Unlike a regular motor, which keeps its momentum or state until new input is given, a servo may rotate on a missed input. Timers and interrupts are common functionalities within embedded systems, however not within the specified scope, and will therefore not be utilized; this kind of functionality might be better suited for a microcontroller in connection to a Raspberry Pi. Analog I/O will not be used either, as it is not a built-in solution on the Raspberry Pi board. For the CPU-specific tasks, there are a number of things that may be required. There are benchmark tools to measure the more generic ones, see 2.4.2 Related Work. These tasks include floating-point operations, memory reads and writes, integer calculations and so on. The CPUs can perform all of these operations. An FPGA, GPU or DSP can be connected to perform other operations or to accelerate the same ones, but will not be utilized, as they are not standard within the scope. For the experiments conducted within the scope of this thesis, the specific performance in these regards is not of interest.

3.6.1 Hardware Specific Container

For the code in the hardware-specific container, called the GPIO container, WiringPi [68] is used for generating a hardware PWM and for general GPIO handling. The hardware PWM is generated on pin 12 (see Figure 11 for the layout), corresponding to WiringPi's pin 1. The output pins that can be set by the applications in the containers are pins 2 through 6, corresponding to the blue-marked pins in Figure 11 (13, 15, 16, 18, 22). The pins that are read by the GPIO container have a pull-up on them and are WiringPi pins 21 through 29, marked orange in Figure 11 (29, 31, 32, 33, 35, 36, 37, 38, 40). Pin 7 (both WiringPi and physical) is used for experiment and measuring purposes. The container also contains a client sending and receiving messages, and internal handling for transforming the messages, which are strings, into booleans and arrays. It also transforms the in-pin values (the values of the pins that are read by the GPIO container) into a string to send to the application containers. See Appendix D.2 for the code.

Figure 11: Pin layout of the Raspberry Pi 2.

3.6.2 Application Containers

The code inside the application containers, called Old container and New container, does not contain any peripheral handling. Inside these containers the application is programmed. They, like the GPIO container, have message-handling code to transform and split strings. The significant part of the code inside these containers, however, is the communication with other containers. Each has at least three threads: one for the application and two for the servers needed for communication. One of the servers is used for communication with the network of containers (including GPIO); the other is used specifically for updating. It listens on a specific updating port, waiting for an updated application to connect to it. The threads are implemented by including the pthread header and compiling with the -lpthread option. The server and application functions are then created as thread functions. This way they can run in parallel without disturbances or delays from other parts of the code. The adherence to the design guidelines (3.2 Design Guidelines) is listed below:

• Scalability - Old container sends the IP-addresses of the container network it is connected to. Not all of these containers have to connect to New container; this is up to New container. For new containers to connect to the updated container, they need to be updated themselves, since it is a client that needs to connect. There is, however, no restriction against a container not having a client, meaning that if the connection can be managed by a server, this is possible without updating again.

• Speed - The updating process is seamless, meaning that the GPIO operations are taken over without stopping the old container and then starting a new one. The two application containers run together, and one does not take over from the other without being tested.

• Self-propelled - The user need only start the new container; the update regime is then executed automatically by the containers. There can be an option for checking the container before it takes over, but this is not necessary.

• General - The updating regime can be used for systems of different complexity described in Table 2.

• Self-tested - The updating container tests itself before taking over execution; for the implemented code this test only returns true and tests nothing. The code of the application containers can be found in Appendix D.3.

3.6.3 Container Communication

The containers in this implementation communicate using the TCP/IP protocol suite. It was chosen for its simple implementation and usage, but primarily for the high reliability of the connection: once a connection is established, messages that are sent are practically guaranteed to arrive at their destination. This was considered the top priority when choosing a communication protocol. The TCP communication is realized by creating a server and a client program to manage the connection. These were implemented through C socket programming, where the servers and clients were created as functions in the containers' programs.
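A condensed, self-contained sketch of this server/client pattern is shown below, with both ends in one process for demonstration and a hypothetical port number; the real containers run the two ends in separate processes and exchange the messages described in Tables 3 to 5. Error handling is kept minimal for brevity:

```c
#include <netdb.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

#define PORT "50007"   /* hypothetical update port */

static void *server_thread(void *arg)
{
    (void)arg;
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_INET;
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_flags = AI_PASSIVE;              /* bind to the host's address */
    getaddrinfo(NULL, PORT, &hints, &res);

    int listener = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    int yes = 1;
    setsockopt(listener, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof yes);
    bind(listener, res->ai_addr, res->ai_addrlen);
    freeaddrinfo(res);
    listen(listener, 1);                      /* one pending connection suffices here */

    int conn = accept(listener, NULL, NULL);  /* blocks until a client connects */
    char buf[64];
    ssize_t n = recv(conn, buf, sizeof buf - 1, 0);
    if (n > 0)
        send(conn, "pong", 4, 0);             /* in the thesis: container info */
    close(conn);
    close(listener);
    return NULL;
}

/* Client side: connect to the server and exchange one message pair. */
int ping_server(char *reply, size_t len)
{
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_INET;
    hints.ai_socktype = SOCK_STREAM;
    getaddrinfo("127.0.0.1", PORT, &hints, &res);

    int s = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (connect(s, res->ai_addr, res->ai_addrlen) != 0) {
        close(s);
        freeaddrinfo(res);
        return -1;
    }
    freeaddrinfo(res);
    send(s, "ping", 4, 0);
    ssize_t n = recv(s, reply, len - 1, 0);
    close(s);
    if (n < 0)
        return -1;
    reply[n] = '\0';
    return 0;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, server_thread, NULL);
    usleep(100000);                 /* crude wait for the server to start listening */
    char reply[64];
    if (ping_server(reply, sizeof reply) == 0)
        printf("got: %s\n", reply);
    pthread_join(tid, NULL);
    return 0;
}
```

The getaddrinfo/socket/bind/listen/accept sequence on the server side and the getaddrinfo/socket/connect sequence on the client side mirror the flow described in the next paragraph.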

Each server uses the function getaddrinfo to set up an address-info struct, which includes assigning the host's IP-address to the socket structures along with the port the server will listen on. The server then uses the socket, bind and listen system calls to create a socket, bind a certain port to the socket descriptor and start listening for incoming connections. The server is then ready to accept incoming connections through the listening socket descriptor. Pending connections are queued up and then assigned a new socket file descriptor while the first is still listening for new incoming connections; the servers in this implementation, however, are not expected to handle more than a single connection. When the connection from a client is accepted, the server uses send and recv to exchange messages containing the information that is to be transmitted between the containers. The update client and the GPIO container's client function operate in a similar way. They also use getaddrinfo, but in this case with the information about the server to connect to, including the server's IP-address and port. The client opens a socket and tries to connect to the server port. It is then able to use send and recv to handle the message communication. The messages sent between the containers are the triggers that propel the updating regime forward. The information sent to the GPIO container from the application containers, giving the behavior of the system, can be seen in Table 3. The returning message, describing the state of the GPIO container and its inputs, can be seen in Table 4. The information sent to the GPIO container includes the IP-address of the application container, the duty cycle the PWM should be set to, and the output pins (out pins) to be set either high or low (out values). The message also includes fields specific to the updating process: ready access and new IP. The variable new IP is normally zero, but when a container (Old) gets a message that it is to be updated, the IP-address of the new container (New) is written to new IP, which is then given to the GPIO container so that it can connect to New container. The variable ready access indicates whether the container sending the message has the right to induce outputs from the GPIO container. If this boolean is set to false, the message is read in, but the data is not used in the functions.

IP / ready access / new IP / duty / out pins / out values
string / boolean / string / integer / array / array

Table 3: Message from Old to GPIO and New to GPIO. First row is the message format with variable name, second row is the type of the variable.

The message that is sent from the GPIO container to the application containers can be seen in Table 4. This message contains information about the in-pins (inval str), which are pre-defined; it also contains a string of error messages, if any occur in the container. The final variable, die, is a boolean that is normally false, but when a new container (New) has taken over the operation of a container (Old), die is set to true in the message sent to the old container, to let it know that it should shut down, or "die".

error / inval str / die
string / array / boolean

Table 4: Message from GPIO to New and Old. First row is the message format with variable name, second row is the type of the variable.

The message sent from the update server in one application container (Old) to the update client in another container (New) contains information about the old container that the new container might need. The information given in the implemented regime can be seen in Table 5; it contains the inval str given by the GPIO container, here named pin in. The message also gives the IP-addresses of all the containers it is connected to (IP network), mostly for verification purposes. In the default case, Old container should give the IP-address of its descendant to the network of containers it is connected to. The authors, however, see that there can be other cases where the new container connects to a container in the network, due to the need for network flexibility described in 3.2 Design Guidelines. In the IP network variable, the memory address of a shared memory can also be given. The variable state can be an integer or an array describing the state the application is in, but could also describe other settings used to start up the application.

pin in / IP network / state
array / string / integer or array

Table 5: Message from the update server to the update client, in this case, from Old to New. First row is the message format with variable name, second row is the type of the variable.

4 Experiment Design

In order to investigate research questions RQ 2 and RQ 3, presented in 1.2 Objectives, experiments are conducted. The first of these, RQ 2, concerns uptime performance during an update. The transition between the Old and New application containers is subject to a design guideline regarding seamlessness, see 3.2 Design Guidelines. To verify this, and to measure the disturbance induced by updating with the designed update regime, the period time of the PWM is measured at the transition. Seamlessness is desired because disturbances in the duty cycle of a PWM for a servo motor might result in unwanted movement. The third research question, RQ 3, instead looks at changes in the power consumption of the embedded system related to the update. The products in the IoT-startup market (see 1.2 Objectives) might not be optimized for low power consumption but may nevertheless run on batteries; it is therefore interesting to see how the current demand during the update regime differs under different limitations. The product may also have limitations on the heat it may dissipate per unit of time, which makes this measurement interesting as well. The difference in power consumption between running three containers instead of two is not of interest in this thesis, since the update solution is meant to be general (3.2 Design Guidelines) and containers have different energy needs depending on their content. The point of interest is to determine a model of the relationship between the limitations set on the containers and the research questions.

4.1 Experiment Method

The experiments are designed with the help of the e-Handbook of Statistical Methods [7]. The experimental design begins with describing the process model as a "black box" with, in this case, two input factors (independent variables) - CPU and memory - that can be controlled and varied during the experiments, since they can be limited when starting containers. These input factors invoke measurable output responses, in this case current and time (dependent variables). The objective chosen for this experiment is to make a model describing the relationship between the dependent and independent variables. The experiments are divided into different parts:

EX 1: Run one settings level many times to see if it has a normal distribution
EX 2: Run a 2-factor, full factorial 2-level experiment with center points
EX 3: Run a 2-factor, full factorial 3-level experiment if the 2-level experiment indicates a non-linear model

To test for a normal distribution, the update regime is run 60 times and the results are then plotted in a probability plot. The 2-factor, full factorial 2-level experiment with center points is made in order to show a connection between the limitation of the CPU and memory utilization and the uptime and current levels of the system. The 2-level experiment can only give a linear relationship between the dependent and independent factors; therefore center points are used to detect whether the relationship is of higher order. The 2-factor, full factorial 3-level experiment is only run if the 2-level experiment indicates a non-linear model of the relationship. This would give a second-order relationship. There could be indications of even higher-order relationships; however, lower-order relationships are usually good enough to predict a behavior [7].

4.1.1 EX 1: Normal Distribution Check

In order to make a model of the relationship between the CPU and memory limitations imposed on the containers and the energy and time it takes to update, the output measurements have to be normally (Gaussian) distributed. To show that they indeed are, the updating regime is run 60 times and the measurements are then plotted.

4.1.2 EX 2: 2-factor, Full Factorial 2-level Experiment with Center Points

A full factorial design demands that all combinations of settings are executed. In a 2-level factorial, the levels, 1 and -1, correspond to a high and a low value. These values are set by test-running the updating process and finding reasonable values. In the Engineering Statistics Handbook [7], the author recommends the engineer to "be bold, but not foolish, in choosing the low and high factor levels". The levels set can be seen in Table 6.

         High   Low
CPU       32   1024
Memory    32   1000

Table 6: High and low values chosen for the experiment.

Besides all possible combinations of high and low values, center points are also checked, in order to detect a non-linear relationship between the dependent and independent variables. These are set at the midpoint between the high and low values. The order of the experiment runs (combinations) is randomized, except for the center points, which are run at regular intervals throughout the experiment. The order can be seen in Table 7. If a higher-order relationship is seen, a similar 3-level experiment will be conducted, see 4.1.3 EX 3: 2-factor, Full Factorial 3-level Experiment.

Run   CPU   Memory
1      0     0
2      1     1
3     -1     1
4      0     0
5     -1    -1
6      1    -1
7      0     0

Table 7: Order of the 2-factor, full factorial 2-level experiment runs with center points. The high values are denoted as 1 and low as -1, the 0 corresponds to the center point checking.

4.1.3 EX 3: 2-factor, Full Factorial 3-level Experiment

The 3-level experiment is run similarly to the 2-level experiment, the difference being that there are three different settings for the CPU and memory limitations, denoted 0, 1 and 2, where the lowest value corresponds to level 0 and so forth. The order of the experiment runs is randomized, see Table 8. The relationship could be of higher order; however, a second-degree relationship is deemed to portray the relationship adequately.

Run   CPU   Memory
1      2     1
2      1     1
3      1     2
4      2     2
5      1     0
6      0     1
7      0     2
8      0     0
9      2     0

Table 8: Order of the 2-factor, full factorial 3-level experiment runs.

4.2 Analysis

The analysis is made in Python, utilizing statistical libraries. The output is saved in CSV files and then read into a Python program. Most of the functions are taken from the scipy library. The current measurements are filtered with a sixth-order Butterworth low-pass filter to smooth the output and disregard disturbances.

4.3 Measurements

From an embedded systems point of view in general, and a hardware point of view in particular, it is interesting to examine the impact of the update from an outside perspective: for example, whether the sensors and actuators stay connected and receive the expected signals at the expected times, and whether the power consumption is affected. The measurements during the updating regime are made from the time the GPIO container receives a new IP-address to connect to until the new container has taken over the execution of the application. The preferred starting point would be when New container is started; however, this is not externally measurable, since this container is not privileged and therefore cannot set a pin for that purpose. For measuring off-platform metrics, a logic analyzer and an oscilloscope are used, see Figure 12 for the connections.

4.3.1 Actuator Signals

Finding a way to accurately monitor the time it takes to actually switch between two containers handling a job, including switching access to peripherals, starting the application and beginning to send signals, pulses or communication messages, can be assumed to be a complex task. A simple and feasible way to estimate this time and its consequences is to measure "outside", from the peripherals' point of view. The platform controls the actuator by sending PWM signals to it at the set frequency. In order to decide whether the system can perform its tasks during the update (see RQ 2 (1.2 Objectives)), the PWM signal sent from the platform to the actuator is monitored. This measurement is made to see whether the signal complies with the required frequency or whether a latency is introduced by the updating process. Measurements are made using a Saleae Logic 8 analyzer that records both digital and analog signals and displays them in a software interface on the user's computer, see Figure 12. There it is possible to navigate through the data and make measurements such as frequency, duty cycle, pulse width, period and RMS voltage, with adjustable sample rate. The signal data can then be saved and exported to different file formats [69]. The logic analyzer is used to measure the PWM output from the Raspberry Pi as well as the update pin that indicates when the actual replacement process starts and finishes. This signal is used to trigger and measure over the significant time interval, and also displays the time it takes to do what is defined as the actual update. The signals are sampled at a 10 MHz digital sample rate.

Figure 12: Schematic overview of hardware connections.

4.3.2 Changes in Current

To measure the average current during the update, an Agilent Technologies Infiniium 1 GHz, 20 GSa/s oscilloscope was used, see Figure 12. The measurement was made as a voltage across two 1.5 ohm resistors inserted between the power supply and the Raspberry Pi. The resistors were placed in the ground wire, along with a 22 µF capacitor between the ground and VCC wires in order to create an even current to the Raspberry Pi. As the voltage across the resistors was very low, a TL072 general-purpose operational amplifier was used to amplify the signal ten times before it was read by the oscilloscope. The oscilloscope used the update signal as a trigger to capture a measurement of the power consumption. Both the voltage across the resistors and the update signal were then logged in comma-separated value files (CSV files) for each experiment run, in order to identify the relevant time interval from the update signal and match it against the resistor voltage to extract the data of interest.

5 Result

Results from the experiments are given in this section, as well as results of additional analysis made due to the results of the experiments.

5.1 Uptime

The measurements of the period time of the PWM are used to determine a model of how uptime disturbances due to updating relate to the limitations placed on the containers. This experiment is conducted to investigate the second research question, RQ 2, see 1.2 Objectives.

5.1.1 EX 1: Normal Distribution Check

The period of the PWM was measured during updating, and an anomaly was detected when the pulse duration was changed due to the update. This can be seen for the second normality-check run in Figure 13. The anomaly is an elongation of the period time when the pulse width of the PWM is changed; the pulse width is set by the application and changes when the application container is changed from Old to New (i.e. when updating). The distribution of this period time over 60 runs of the update sequence can be seen in Figure 14. It is a very tight distribution; the spread is 0.1 µs, which is the smallest measurable difference. This tight distribution indicates that a factorial experiment can be made on this anomaly.

Figure 13: The anomaly found when updating can be seen in the difference in period time, see the bottom plot. The anomaly is marked with stars to show that it happens during the update, see the first plot, and when the duty cycle is changed, see the second plot.

Figure 14: Histogram of the period times during updating.

5.1.2 EX 2: 2-factor, Full Factorial 2-level Experiment with Center Points

Run   CPU   Mem   Time Period
1      0     0    0.0214372
2      1     1    0.0214372
3     -1     1    0.0214371
4      0     0    0.0214372
5     -1    -1    0.0214372
6      1    -1    0.0214371
7      0     0    0.0214370

Table 9: Order of the 2-factor, full factorial 2-level experiment runs with cen- ter points. The high values are denoted as 1 and low as -1, the 0 corresponds to the center point checking. The results are in SI-units.

The resulting period times at the change of duty cycle can be seen in Table 9. The results all lie within the very tight distribution seen in Figure 14, except for run number 7, which differs from the distribution by 0.1 µs. Since the sampling resolution is 0.1 µs, no conclusion can be drawn about this difference, especially since it occurred at a center-point check. Therefore, the 3-level experiment will not be conducted.

5.1.3 Comparison to Behavior Without Container

The anomaly in the period time seen when the pulse duration is changed due to the update has a constant value inside the container. This value is compared to the behavior when changing the pulse duration outside a container, to determine whether this is a container-bound anomaly or an anomaly connected to the PWM peripheral or the operating system. Measurements for different changes in pulse width outside the container can be seen in Figure 15, which shows that the anomaly exists outside the container as well. Sixty runs were made changing the duty cycle to the same value as in the updating regime. The results can be seen in Figure 16, showing that the behavior outside the container is similar; in some cases the period is elongated by 0.1 µs more than inside the container.

Figure 15: Changes in period time of the PWM due to changing pulse width.

The period time of the PWM during the update regime is also compared to that outside a container. The results can be seen in Figure 17, where it is clear that no difference can be seen.

Figure 16: Histogram comparing the period time at the switch of pulse width, in- and outside a container.

Figure 17: Histograms for comparison of the period time in- and outside a container.

5.2 Current Measurements

The current measurement is made to determine a model of how the average current during the update relates to the limitations placed on the containers. This experiment is conducted to investigate the final research question, RQ 3, see 1.2 Objectives.

5.2.1 EX 1: Normal Distribution Check

The means of the current during updating were plotted to check for normal distribution, see Figure 18. The plot shows that the distribution is normal and that there is no trend over the time the samples were taken (last plot). This indicates that a factorial experiment can be done. The mean of the means is 0.33736 A.

5.2.2 EX 2: 2-factor, Full Factorial 2-level Experiment with Center Points

The result of the experiment can be seen in Table 10.

Run   CPU   Mem   Current Mean
1      0     0    0.34034
2      1     1    0.34299
3     -1     1    0.33717
4      0     0    0.33598
5     -1    -1    0.33892
6      1    -1    0.33989
7      0     0    0.33190

Table 10: Order of the 2-factor, full factorial 2-level experiment runs with center points. The high values are denoted as 1 and low as -1, the 0 corresponds to the center point checking. The results are in SI-units.

For the current, the results are plotted and compared against the results of the normal distribution check, see Figure 19. From this plot it is clear that no relationship can be found between the settings and the current. The distributions are also plotted together in Figure 20, showing that they are comparable. Because no relationship could be found, the 3-level experiment will not be conducted for the current.

Figure 18: Distribution plots of the means of the current during updating.

Figure 19: Distribution plots of the means of the current during updating for different settings.

Figure 20: Distribution plots of the means of the current during updating for different settings.

6 Conclusion

Here follows conclusions that can be made with regards to the research questions posed in 1.2 Objectives. The first research question, RQ 1, is answered not with experiments but with a logical reasoning in 2 Prestudy. The kind of hardware capabilities that is needed to utilize Docker containers, specifically memory and CPU, is determined by whether the hardware can run a distribution of Linux with the ability to run Docker. Docker in its turn is very lightweight, meaning that the capabilities are not significantly higher than just running Linux. The capabilities will most likely rather be determined by the application that is to be run inside Docker. The second research question, RQ 2, is answered by conducting experiments, the results of which can be seen in 5.1 Uptime. The normal distribution check (5.1.1 EX 1: Normal Distribu- tion Check) indicates an anomaly when changing the pulse width of the PWM. The 2-level full factorial experiment 5.1.2 EX 2: 2-factor, Full Factorial 2-level Experiment with Center Points showed that this anomaly is constant no matter how the container’s CPU time and memory uti- lization is limited. A quantitative analysis shows that this behavior is not connected to running the software in a container (5.1.3 Comparison to Behavior Without Container). The comparison of time in- and outside of a container showed no difference. This behavior is therefore connected to the the behavior of the OS or the peripheral not the updating regime or the containerization. The uptime performance is not affected by an update of a soft-deadline functionality software in an embedded system utilizing containers as microservices. The third research question, RQ 3, is also answered by conducting experiments, the results can be seen in 5.2 Current Measurements. The normal distribution check (5.2.1 EX 1: Normal Distribution Check) gave a normal distribution of the current measurements, indicating that a 2-level full factorial experiment could be conducted. 
The 2-level full factorial experiment (5.2.2 EX 2: 2-factor, Full Factorial 2-level Experiment with Center Points) shows results within the distribution no matter how the container's CPU time and memory utilization are limited. No measurable changes in the power consumption of the embedded system related to the update were found.
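The 2-level full factorial design with center points used in these experiments can be sketched in a few lines. The factor names and physical ranges below are illustrative assumptions, not the levels actually used in the thesis.

```python
from itertools import product

def full_factorial_2level(factors, n_center=0):
    """2-level full factorial design in coded units (-1, +1),
    with n_center center-point runs (all factors at 0) appended."""
    runs = [dict(zip(factors, levels))
            for levels in product((-1, +1), repeat=len(factors))]
    runs += [{f: 0 for f in factors} for _ in range(n_center)]
    return runs

def decode(run, ranges):
    """Map coded levels back to physical settings for each factor."""
    return {f: lo + (run[f] + 1) / 2 * (hi - lo)
            for f, (lo, hi) in ranges.items()}

# Hypothetical factor ranges: CPU share in percent, memory limit in MB.
design = full_factorial_2level(["cpu", "mem"], n_center=3)
settings = [decode(r, {"cpu": (25.0, 100.0), "mem": (64.0, 256.0)})
            for r in design]
```

The four corner runs cover all combinations of the low and high limits, while the center points allow a check for curvature in the response.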

7 Discussion

The implementation this thesis presents is by no means a complete product ready to be used in the IoT start-up field, but rather a proof of concept that using Docker containers for updating applications in embedded systems is promising as a method. When the product needs an altered application, while still needing to keep the uptime for its tasks, this update can be performed accordingly. In an ever-changing world where embedded products constantly need to stay up to date with their environment, updating dynamically becomes a mission of great importance. This study hopes to illustrate a fraction of the possibilities in this area.

7.1 Result

In this thesis, it is concluded from the experiments that no disturbances in uptime could be discerned because of the implemented update regime, nor because of the containerization. Operating-system-level virtualization is very lightweight, and the results indicate that it is so lightweight that sensors and actuators could not, from the outside, give any indication of the containerization. For the current, varying the settings of the containers did not give any changes. This result may be due to the high power efficiency of ARM processors. Uptime and power consumption are most likely only affected by a substantial increase in load, which the update process does not cause. This is probably more dependent on the kind of application that is implemented and, most probably, on the hardware platform it is run on. Tests with the intended application and hardware therefore need to be conducted before concluding on the suitability of Docker and the updating regime.

7.2 Update Regime and Model

The model in this thesis is meant to be the most general representation of the updating process, and does not have extended fault handling. This must of course be added for a real implementation. In the regime, the start-up of the servers and clients is expected to be faultless. In case of inability to start these tasks, the regime must be stopped, either inside the container or outside by an orchestrating tool. This can also be done by hand. This inability will not make any permanent alteration to the old container, so the error is confined to the new container and removed as it is shut down. Other handling that should be implemented is the testing; the test function or functions should be connected to the actual application and check the timing constraints, so that the state is coherent with the old container (in case multiple states exist and synchronization of states is needed). Run-time errors and logic errors should already be tested for before launch, but it is reasonable to expect the testing function to handle these kinds of errors to some extent as well. The level of testing also adds to the size of the system, meaning that, as for all security, there will be a trade-off between the size of the container, or image, and the magnitude of the security. The testing should also check the plausibility of the data that is received, and have handling for data outside the allowed interval. The update regime is designed with greater complexity in mind. Table 2 in 3.2 Design Guidelines shows different levels of complexity; connection to additional memory and network flexibility can both be handled within the update regime designed, although this is not implemented. In a

more complex setting, the GPIO and application container might be part of a bigger network of microservices, where the application container keeps continuous contact with a number of other microservice nodes. In this case, their IP addresses should be passed from the Old container to the New, so that New can make sure that they are all connected before indicating that it is ready to take over operation. The GPIO container might be connected to multiple application containers as well, which demands handling in this container for access to the different peripherals. Further complexity might include the application container holding multiple states that must be transferred and resumed by the new application container. This enables a more seamless exchange for more complex applications. It is also of greater importance in hard real-time systems, where a very precise and quick switch of software is needed and where the transition of states plays a bigger role. To realize this, the software can be prepared with certain update points where it is ready to hand over, and the new software can be equipped with corresponding points where the program can be resumed, in order to synchronize the containers. Further synchronization schemes can also be implemented, as it may be beneficial to synchronize before the hand-over, where communication delays can be calculated and taken into account before the switch.
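The hand-over of state and peer addresses between the Old and New containers could, as a minimal sketch, look like the following. The JSON message format and the field names are assumptions for illustration, not the actual protocol of the implementation.

```python
import json

def make_handover(state, peers):
    """Old container: package the application state and the IP addresses of
    the microservice peers into one hand-over message for the new container."""
    return json.dumps({"state": state, "peers": peers})

def ready_to_take_over(message, connected):
    """New container: indicate readiness only once every peer listed by the
    old container has been connected; also return the transferred state."""
    msg = json.loads(message)
    return set(msg["peers"]).issubset(connected), msg["state"]

# The new container may only signal readiness when all peers are reachable.
handover = make_handover({"counter": 42}, ["10.0.0.2", "10.0.0.3"])
ready, state = ready_to_take_over(handover, {"10.0.0.2", "10.0.0.3"})
```

If any peer from the hand-over message is still unconnected, `ready_to_take_over` returns `False` and the Old container keeps operating.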

To create a truly robust updating process, rollback must be enabled throughout the entire process. The naive way to replace something would be to remove the old and put in the new as fast as possible, but if this takes any considerable amount of time, what happens if a problem occurs in between? This creates a need for: a) a program that checks itself and does not proceed if corrupt or malfunctioning, and b) a program that can roll back to a safe state when an error occurs. The update regime this thesis presents was designed with this in mind. It is also built around the principle of letting the containers handle the updates themselves, ensuring that all participants are ready for every action. By following the microservice principle and placing the hardware control in a separate, privileged container, the problem of hardware read/write collisions or a switching "gap" in time is smoothly avoided. Of course, the approach introduces some restrictions on how different the updated software can be in its structure and content. The application needs to be fitted into the application thread, and if its contact with other nodes is to be extended, this has to be implemented in possibly new server/client functions. The point is that for the new software to be able to replace the old one seamlessly and on its own, both the old software and the new software must be prepared for this exchange and structured in a compatible way (see 2.5 Remote Update).
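The self-check-then-rollback principle can be condensed into one transaction-like step. The function names below are hypothetical stand-ins for the actual container operations (starting, self-testing, stopping and removing containers), not the real interface of the implementation.

```python
def try_update(start_new, test_new, stop_old, remove_new):
    """One hypothetical update transaction: the new container must pass its
    self-test before the old one is stopped; on failure the new container is
    removed and the old keeps running, i.e. a rollback to the safe state."""
    start_new()
    if test_new():       # e.g. timing constraints and data-plausibility checks
        stop_old()       # hand over operation only after a successful test
        return True
    remove_new()         # the error stays confined to the new container
    return False

# Successful path: the old container is stopped only after the test passes.
log = []
ok = try_update(lambda: log.append("start"),
                lambda: True,
                lambda: log.append("stop-old"),
                lambda: log.append("remove-new"))
```

On the failing path, `stop_old` is never called, so the running system is left untouched.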

Stubbs, Moreira and Dooley's tool, Serfnode [66], illustrates well the further needs that arise when recognizing that an ideal microservice architecture is hard or impossible to create (see 2.5.2 Microservices Principle). Limitations in hardware do make the different microservices dependent, in the sense that they may need computation time at the same time if they run on the same hardware. Cyber-physical systems often have a real-time aspect (with either soft or hard deadlines), where tasks must be completed within a timespan. If the microservices were completely parallel, which they often are not because of the communication with each other or the sharing of hardware, these kinds of deadlines could be fulfilled without scheduling or re-scheduling. In distributed systems, there is also the added complexity that different services may not be directly dependent on each other to conduct their work, yet the validity of the executed task might disappear without correct input from the other services. In these cases, the suitability of using microservices must be evaluated thoroughly. A microservice architecture puts emphasis on the division between the different services and the independence between tasks. Some distributed systems instead aim for integration of different programs or computing units in order to give them a closely bonded relationship, for safety, security

or scheduling reasons (or all of them).

7.3 Implementation

Docker is not officially supported on 32-bit architectures, but it has still been used in this work. It is not too bold to expect that 32-bit architectures will soon be included in what Docker supports, as many implementations already exist. This means, however, that there are no guarantees that this Docker implementation is fully tested and will work for all kinds of containers, applications and hardware setups. This thesis shows that this updating method can be one of the right directions for developing software updating techniques for soft-deadline functionalities in embedded systems. However, extensive further experimentation and investigation needs to be done on this topic before a clear conclusion regarding its suitability can be drawn, especially concerning different hardware environments.

The choice of communication protocol was based on the need for a simple, Docker-friendly implementation and confidence in the arrival of messages. Both TCP and UDP were deliberated, where TCP was seen as advantageous for its open data stream and assurance of arrival of sent messages. UDP, on the other hand, was considered for its very fast communication and the simplicity of just sending off messages without establishing a stream. Since the update regime highly relies on swiftness to be successful, the possible speed of UDP delivery was seen as a great advantage. The choice in the end, however, fell upon the TCP protocol, because missing a message such as the "update-go" would be disastrous for the whole update process as it is currently designed. Thought was, however, put into evolving the update regime by replacing only the TCP communication to and from the GPIO container with the UDP protocol. This communication goes on constantly and at a fast pace, and most of the time with the same data in every message.
For the critical moment of exchange, when the new container shall become the old, the communication could be developed to check for message delivery and retry until positive, so that these important messages surely arrive.
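The delivery-check-and-retry scheme for critical messages could be sketched with plain TCP sockets as follows. The port number and message contents are illustrative, and the acknowledging server is a stand-in for the receiving container.

```python
import socket
import threading

def ack_server(port, ready):
    """Stand-in for the receiving container: acknowledge the critical message."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    ready.set()                       # signal that the server is accepting
    conn, _ = srv.accept()
    with conn:
        if conn.recv(1024) == b"update-go":
            conn.sendall(b"ack")
    srv.close()

def send_until_acked(port, payload, retries=5):
    """Send a critical message such as b"update-go" and retry until the
    receiver acknowledges it, so the update step cannot be silently lost."""
    for _ in range(retries):
        try:
            with socket.create_connection(("127.0.0.1", port), timeout=2) as s:
                s.sendall(payload)
                if s.recv(1024) == b"ack":
                    return True
        except OSError:
            continue                  # connection or send failed: retry
    return False
```

With TCP the transport already guarantees delivery on an open connection; the application-level acknowledgement additionally confirms that the receiver has acted on the message.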

7.4 Method

The quantitative method used gave a clear indication of the performance regarding the posed research questions; however, this thesis is not enough to determine whether the regime and Docker are suitable for updating purposes. Engineers and knowledgeable people in the area should also be consulted to capture the more qualitative aspects of the suitability. For this thesis, and the scope posed, the quantitative method gave a clear indication. The internal validity was assured by having an experimental design that is described in the literature. The external validity was motivated by using hardware that was deemed representative, as well as by having a flexible and general updating regime. Absolute numbers were not of interest, but rather relationships, changes and differences.

7.5 Security

The matter of security is deemed out of scope for this thesis, but should by all means be considered before implementation. Security is becoming more and more important in the Internet of Things, and an update method would definitely benefit from a high degree of security.

Implementing encrypted sockets and secure web proxies should therefore be considered. Security regarding containers is tightly connected to container isolation. Some thought should be put into what isolation can be expected of containers, as they do not possess the same level of isolation as e.g. full virtualization (see 2.3 Taxonomy of Virtualization). This is one of the trade-offs for their lightweight nature. As mentioned in the prestudy (2.3 Taxonomy of Virtualization), there are solutions for using Docker containers inside other types of virtualization solutions, such as Virtuozzo Containers, which claims more security than Docker does. Docker can also be used in virtual machines, adding another layer of virtualization and, consequently, another layer of security. All of these additional layers of virtualization may, however, make the solution less lightweight. The consequences for utilizing the peripherals must once more be tested to assure uptime, as must any additional power consumption that may occur. Docker themselves suggest hardening solutions of the operating system to add security [37]. Hardening solutions are easiest described as solutions where the attack surface, meaning the points where an attacker may try to enter or extract data from the system, is reduced. They also point out additional settings that can be made in the Docker Engine in order to map a privileged user (the root user) inside the container to a non-privileged user outside the container. For the implementation made in this thesis, the engineer or user is the one to instigate the update, while other embedded systems have update points (see 2.5 Remote Update) in the system. These are points where the application looks for updates of itself. This self-instigation of the updating may also be another way to reduce the attack surface, as the updating functionality can be turned off before the updating point is reached. This may, however, have an effect on the speed of the update.

8 Future Work

This work could be continued in several ways. To complete the update, automating the pulling, building, starting, stopping and removal of containers should be added with the aid of scripts. The update regime could be extended to higher complexity by fully incorporating the handling, synchronizing and transferring of different states, but also by managing an extended number of connections to other containers in the network. To fully investigate the effects of the update, the tested peripherals could be broadened to include for example I2C or SMPP. To explore the robustness of the update regime, experiments could be conducted that induce typical disturbances of various kinds, even power-downs, to test the update regime's ability to roll back or persist.

Updating multiple containers simultaneously is something that could be implemented by extending the updating regime. Having nested containers will most likely introduce considerable overhead (2.5.2 Microservices Principle); as the system the containers run on is embedded, light weight was considered of high importance and this approach was therefore not attempted. The switch of containers could naturally also be done by stopping the old application and immediately starting the new one, but this was estimated to produce an uptime disturbance too big to go unnoticed and was also considered too risky. This could, however, be investigated further as well.

To further be able to draw conclusions regarding the suitability of implementing Docker containers for this purpose on embedded systems, implementation and experiments should be performed on other kinds of hardware, in particular other kinds of CPU architectures.
The authors propose the Intel Edison kit, with its x86 IA-32 architecture, as a natural next hardware platform to implement on, as it falls within the hardware options of interest listed in 2.4.3 Docker Compatible Hardware and is another type of CPU architecture that can be used for this implementation. The containers' images are built from base images of Debian Wheezy and Jessie, adding a few utilities. When resources are limited or there are high demands on lightweight implementations, the containers could be slimmed down to be even more basic and tailored to contain only what is absolutely needed to run their applications. The same can be said about the Linux-based operating system, which can also be slimmed down or built according to the exact requirements. To formally fulfill the design guideline of speed (3.2 Design Guidelines), there should also be handling for the case where the regime takes longer than the required updating time to finish. A next step could be to implement the updating regime as a task with a hard deadline. Even though this thesis points towards independence and isolation of the applications as they are being updated, if there are harder restrictions on the hardware or the applications are very heavy, having a deadline for the update may be a valid requirement. This would require some sort of additional handling, either inside or outside of the container. The solution could be a container orchestration tool such as Kubernetes or a timer inside the container. Whether a real-time-patched Linux kernel could be used to orchestrate containers requires further investigation and most likely further patches of the kernel. As both Kanuparthi, Karri and Addepalli [13] and Konieczek et al. [14] described in the prestudy, the Internet of Things in many ways includes cyber-physical systems (see 2.3 Taxonomy of Virtualization).
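The timer-inside-the-container alternative mentioned above could, under the assumption that a missed deadline should merely trigger a signal (e.g. a rollback request) rather than kill the step, be sketched as:

```python
import threading
import time

def run_with_deadline(update_step, on_timeout, deadline_s):
    """Hypothetical in-container watchdog: run one step of the update regime
    and invoke on_timeout (e.g. trigger a rollback) if the step exceeds the
    required updating time. The step itself is not forcibly killed here."""
    timer = threading.Timer(deadline_s, on_timeout)
    timer.start()
    try:
        return update_step()
    finally:
        timer.cancel()               # no-op if the timer has already fired
```

An orchestration tool outside the container would instead be able to terminate the container outright, which this in-process sketch cannot do.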
In this thesis, the market area considered has not included hard real-time; however, many embedded systems have hard-deadline requirements, such as motor control. The Linux kernel is not a real-time system architecture per se, but it has been used as such. There are patches that can be applied to the Linux kernel in order for it to become preemptive and thus a real-time system, or at least close to one. The authors of this thesis will not argue how hard the real-time guarantees of a real-time implementation of a Linux system are. However, because Linux is used for real-time purposes, having a Linux implementation of the update regime as presented in this thesis arguably makes a real-time implementation of the updating regime possible as well. This must be further investigated for a definite conclusion on the matter.

9 Bibliography

[1] B. C. Howard. (2013, Aug.) How the "Internet of Things" May Change the World. National Geographic. [Accessed 5-Feb-2016]. [Online]. Available: http://news.nationalgeographic.com/news/2013/08/130830-internet-of-things-technology-rfid-chips-smart/
[2] L. Coetzee and J. Eksteen, "The internet of things - promise for the future?: An introduction," IST-Africa Conference Proceedings, 2011, pp. 1–9, May 2011. [Online]. Available: http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=6096323
[3] L. Tan and N. Wang, "Future internet: The internet of things," ser. Advanced Computer Theory and Engineering (ICACTE), 2010 3rd International Conference on, vol. 5, pp. V5–376. [Online]. Available: http://ieeexplore.ieee.org.focus.lib.kth.se/xpl/articleDetails.jsp?arnumber=5579543
[4] R. Schneiderman, "Internet of things/m2m - a (standards) work in progress," Modern Standardization: Case Studies at the Crossroads of Technology, Economics, and Politics, pp. 288–288, 2015. [Online]. Available: http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=7111638
[5] R. Conradi and A. I. Wang, Empirical Methods and Studies in Software Engineering: Experiences from ESERNET. Springer, 2003, vol. 2765.
[6] J. W. Creswell, Research Design: Qualitative, Quantitative, and Mixed Methods Approaches, 4th ed. Sage Publications, 2014.
[7] NIST/SEMATECH. (2012) e-Handbook of Statistical Methods. NIST. [Accessed 21-Mar-2016]. [Online]. Available: http://www.itl.nist.gov/div898/handbook/
[8] R. Daws. (2016, Feb.) IoT devices may be used to spy on you, claims US intelligence chief. IoT Tech News. [Accessed 18-Feb-2016]. [Online]. Available: http://www.iottechnews.com/news/2016/feb/10/iot-devices-may-be-used-spy-you-claims-us-intelligence-chief/
[9] P. Marwedel, Embedded Systems Design. Springer, 2006.
[10] E. A. Lee and S. A. Seshia, Introduction to Embedded Systems: A Cyber-Physical Systems Approach, 2nd ed. LeeSeshia.org, 2015.
[11] L. Thiele and E. Wandeler, "Performance analysis of distributed embedded systems," in Embedded Systems Handbook, R. Zurawski, Ed. CRC Press, 2005.
[12] E. Cavalcante, M. P. Alves, T. Batista, F. C. Delicato, and P. F. Pires, "An analysis of reference architectures for the internet of things," in Proceedings of the 1st International Workshop on Exploring Component-based Techniques for Constructing Reference Architectures, ser. CobRA '15, 2015, pp. 13–16. [Online]. Available: http://doi.acm.org.focus.lib.kth.se/10.1145/2755567.2755569
[13] A. Kanuparthi, R. Karri, and S. Addepalli, "Hardware and embedded security in the context of internet of things," in Proceedings of the 2013 ACM Workshop on Security, Privacy & Dependability for Cyber Vehicles, ser. CyCAR '13. New York, NY, USA: ACM, 2013, pp. 61–64. [Online]. Available: http://doi.acm.org.focus.lib.kth.se/10.1145/2517968.2517976

[14] B. Konieczek, M. Rethfeldt, F. Golatowski, and D. Timmermann, "Real-time communication for the internet of things using jcoap," ser. Proceedings of the 2015 18th International Symposium on Real-Time Distributed Computing. IEEE, pp. 134–141. [Online]. Available: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7153799
[15] J. Sahoo, S. Mohapatra, and R. Lath, "Virtualization: A survey on concepts, taxonomy and associated security issues," ser. 2010 Second International Conference on Computer and Network Technology. IEEE, pp. 222–226. [Online]. Available: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5474503
[16] G. J. Popek and R. P. Goldberg, "Formal requirements for virtualizable third generation architectures," Communications of the ACM, vol. 17, no. 7, pp. 412–421, 1974.
[17] F. Rodríguez-Haro, F. Freitag, L. Navarro, E. Hernández-Sánchez, N. Farías-Mendoza, A. J. Guerrero-Ibáñez, and A. González-Potes, "A summary of virtualization techniques," Procedia Technology, vol. 3, Jan 2012. [Online]. Available: http://dx.doi.org/10.1016/j.protcy.2012.03.029
[18] T. Shimada, T. Yashiro, N. Koshizuka, and K. Sakamura, "A real-time hypervisor for embedded systems with hardware virtualization support," ser. TRON Symposium (TRONSHOW), 2015, Dec 2014, pp. 1–7.
[19] VMware. VMware ThinApp. VMware, Inc. [Accessed 17-Feb-2016]. [Online]. Available: http://www.vmware.com/products/thinapp
[20] Spoon. Spoon container engine. Spoon, Inc. [Accessed 3-Mar-2016]. [Online]. Available: https://spoon.net/
[21] Unix. (2015) The UNIX system. The Open Group 1995-2015. [Accessed 4-Mar-2016]. [Online]. Available: http://www.unix.org/
[22] ubuntu. (2015, Sep.) BasicChroot. ubuntu documentation. [Accessed 3-Mar-2016]. [Online]. Available: https://help.ubuntu.com/community/BasicChroot
[23] W. Toomey. (2010) The Unix tree. [Accessed 4-Mar-2016]. [Online]. Available: http://minnie.tuhs.org/cgi-bin/utree.pl
[24] ArchWiki. systemd-nspawn. ArchWiki. [Accessed 4-Mar-2016]. [Online]. Available: https://wiki.archlinux.org/index.php/Systemd-nspawn
[25] Linux. The Linux kernel archives. Linux Kernel Organization, Inc. [Accessed 4-Mar-2016]. [Online]. Available: https://www.kernel.org/
[26] Wikipedia. Operating-system-level virtualization. Wikimedia Foundation, Inc. [Accessed 17-Feb-2016]. [Online]. Available: https://en.wikipedia.org/wiki/Operating-system-level_virtualization
[27] Archlinux. (2015, Dec.) Linux containers. archlinux Wiki. [Accessed 16-Feb-2016]. [Online]. Available: https://wiki.archlinux.org/index.php
[28] LinuxContainers.org. Linux containers. Canonical. [Accessed 17-Feb-2016]. [Online]. Available: https://linuxcontainers.org/
[29] Linux-VServer.org. Linux VServer. [Accessed 17-Feb-2016]. [Online]. Available: http://linux-vserver.org/Welcome_to_Linux-VServer.org

[30] Virtuozzo. (2015) Parallels IP Holdings GmbH. [Accessed 3-Mar-2016]. [Online]. Available: http://www.virtuozzo.com/
[31] openvz.org. OpenVZ Virtuozzo containers. [Accessed 17-Feb-2016]. [Online]. Available: https://openvz.org/Main_Page
[32] lmctfy. Let me contain that for you. Google Inc. [Accessed 17-Feb-2016]. [Online]. Available: https://github.com/google/lmctfy
[33] Google. (2015, Nov.) What is Google Container Engine? Google Cloud Platform. [Accessed 4-Mar-2016]. [Online]. Available: https://cloud.google.com/container-engine/docs/
[34] V. Marmol and R. Jnagal, "Let me contain that for you!" ser. Linux Plumbers Conference. [Online]. Available: http://www.linuxplumbersconf.org/2013/ocw/proposals/1239
[35] appc. (2015, Dec.) App container. appc. [Accessed 4-Mar-2016]. [Online]. Available: https://github.com/appc/spec
[36] rkt. (2016, Feb.) rkt - app container runtime. rkt. [Accessed 4-Mar-2016]. [Online]. Available: https://github.com/coreos/rkt
[37] Docker. Docker website. Docker Inc. [Accessed 16-Feb-2016]. [Online]. Available: https://www.docker.com/
[38] S. Hykes. (2014, Mar.) Docker 0.9: introducing execution drivers and libcontainer. Docker Blog. [Accessed 4-Mar-2016]. [Online]. Available: https://blog.docker.com/2014/03/docker-0-9-introducing-execution-drivers-and-libcontainer/
[39] B. Golob. (2015, Jun.) Docker and broad industry coalition unite to create open container project. Docker Blog. [Accessed 4-Mar-2016]. [Online]. Available: https://blog.docker.com/2015/06/open-container-project-foundation/
[40] OpenContainers. (2016) The Open Container Initiative. Open Container Initiative, a Linux Foundation Collaborative Project. [Accessed 4-Mar-2016]. [Online]. Available: https://www.opencontainers.org/
[41] A. Polvi. (2015, Jun.) App container and the open container project. CoreOS Blog. [Accessed 4-Mar-2016]. [Online]. Available: https://coreos.com/blog/app-container-and-the-open-container-project/
[42] ——. (2014, Dec.) CoreOS is building a container runtime, rkt. CoreOS Blog. [Accessed 4-Mar-2016]. [Online]. Available: https://coreos.com/blog/rocket/
[43] runC. runc. Open Container Initiative. [Accessed 4-Mar-2016]. [Online]. Available: https://github.com/opencontainers/runc
[44] C. Anderson, "Docker [software engineering]," IEEE Software, vol. 32, May 2015. [Online]. Available: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=7093032
[45] B. DeHamer. (2014, Jul.) Optimizing Docker images. CenturyLink Blog. [Accessed 11-Mar-2016]. [Online]. Available: https://www.ctl.io/developers/blog/post/optimizing-docker-images/
[46] M. Raho, A. Spyridakis, M. Paolino, and D. Raho, "Kvm, xen and docker: A performance analysis for arm based nfv and cloud computing," ser. Information, Electronic and Electrical Engineering (AIEEE), 2015 IEEE 3rd Workshop on Advances in, pp. 1–8. [Online]. Available: http://ieeexplore.ieee.org.focus.lib.kth.se/xpl/mostRecentIssue.jsp?punumber=7361819
[47] W. Felter, A. Ferreira, R. Rajamony, and J. Rubio, "An updated performance comparison of virtual machines and linux containers," IBM Research Report, July 2014. [Online]. Available: http://domino.research.ibm.com/library/cyberdig.nsf/papers/0929052195DD819C85257D2300681E7B
[48] G. M. Xavier, V. M. Neves, D. F. Rossi, C. T. Ferreto, T. Lange, and F. C. A. De Rose, "Performance evaluation of container-based virtualization for high performance computing environments," ser. 2013 21st Euromicro International Conference on Parallel, Distributed, and Network-Based Processing. IEEE, pp. 233–240. [Online]. Available: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6498558
[49] M. A. Joy, "Performance comparison between linux containers and virtual machines," Computer Engineering and Applications (ICACEA), 2015 International Conference on Advances in, pp. 342–346, Mar 2015. [Online]. Available: http://ieeexplore.ieee.org.focus.lib.kth.se/xpl/mostRecentIssue.jsp?punumber=7153311
[50] Docker-Slim. docker-slim: Lean and mean docker containers. [Accessed 17-Feb-2016]. [Online]. Available: https://github.com/cloudimmunity/docker-slim#description
[51] eLinux. (2016) Embedded Linux wiki. [Accessed 19-Feb-2016]. [Online]. Available: http://elinux.org/Main_Page
[52] A. Herzog. (2015, Oct.) Raspberry Pi DockerCon challenge: We have a winner! Docker Blog. [Accessed 5-Mar-2016]. [Online]. Available: https://blog.docker.com/2015/10/raspberry-pi-dockercon-challenge-winner/
[53] Wikipedia. Single-board computer. Wikimedia Foundation, Inc. [Accessed 17-Feb-2016]. [Online]. Available: https://en.wikipedia.org/wiki/Single-board_computer
[54] Canonical. Ubuntu homepage. Canonical Ltd. [Accessed 8-Mar-2016]. [Online]. Available: http://www.ubuntu.com
[55] E. Brown. (2015, Jun.) Top 10 Linux and Android hacker SBCs of 2015. Linux.com. [Accessed 10-Mar-2016]. [Online]. Available: https://www.linux.com/news/embedded-mobile/mobile-linux/834861-top-10-linux-and-android-based-hacker-sbcs-of-2015
[56] X. Chen, D. Zhang, and H. Yang, "Research on key technologies of on-line programming in embedded system," ser. 2009 Third International Symposium on Intelligent Information Technology Application. IEEE, pp. 45–48. [Online]. Available: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5370419
[57] M. C. Hayden, K. E. Smith, M. Hicks, and S. J. Foster, "State transfer for clear and efficient runtime updates," ser. 2011 IEEE 27th International Conference on Data Engineering Workshops. IEEE, pp. 179–184. [Online]. Available: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5767632
[58] M. C. Hayden, K. Saur, K. E. Smith, M. Hicks, and S. J. Foster, "Kitsune: Efficient, general-purpose dynamic software updating for c," ACM Transactions on Programming Languages and Systems, vol. 36, Oct 2014. [Online]. Available: http://dl.acm.org/citation.cfm?doid=2684821.2629460

[59] K. E. Smith, M. Hicks, and S. J. Foster, "Towards standardized benchmarks for dynamic software updating systems," ser. 2012 4th International Workshop on Hot Topics in Software Upgrades (HotSWUp). IEEE, pp. 11–15. [Online]. Available: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6226609
[60] H. Seifzadeh, P. A. A. Kazem, M. Kargahi, and A. Movaghar, "A method for dynamic software updating in real-time systems," ser. 2009 Eighth IEEE/ACIS International Conference on Computer and Information Science. IEEE, pp. 34–38. [Online]. Available: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5223140
[61] F. Chen, W. Qiang, H. Jin, D. Zou, and D. Wang, "Multi-version execution for the dynamic updating of cloud applications," ser. Computer Software and Applications Conference (COMPSAC), 2015 IEEE 39th Annual, vol. 2, pp. 185–190. [Online]. Available: http://ieeexplore.ieee.org.focus.lib.kth.se/xpl/articleDetails.jsp?arnumber=7273617
[62] J. Buisson, E. Calvacante, F. Dagnat, E. Leroux, and S. Martinez, "Coqcots & pycots: non-stopping components for safe dynamic reconfiguration," ser. Proceedings of the 17th International ACM Sigsoft Symposium on Component-Based Software Engineering - CBSE '14. ACM Press, pp. 85–90. [Online]. Available: http://dl.acm.org/citation.cfm?doid=2602458.2602459
[63] S. An, X. Ma, C. Cao, P. Yu, and C. Xu, "An event-based formal framework for dynamic software update," ser. Software Quality, Reliability and Security (QRS), 2015 IEEE International Conference on, pp. 173–182. [Online]. Available: http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=7272929
[64] M. Amaral, J. Polo, D. Carrera, I. Mohomed, M. Unuvar, and M. Steinder, "Performance evaluation of microservices architectures using containers," ser. Network Computing and Applications (NCA), 2015 IEEE 14th International Symposium on, pp. 27–34. [Online]. Available: http://ieeexplore.ieee.org.focus.lib.kth.se/xpl/articleDetails.jsp?arnumber=7371699
[65] A. Krylovskiy, M. Jahn, and E. Patti, "Designing a smart city internet of things platform with microservice architecture," ser. Future Internet of Things and Cloud (FiCloud), 2015 3rd International Conference on, pp. 25–30. [Online]. Available: http://ieeexplore.ieee.org.focus.lib.kth.se/xpl/articleDetails.jsp?arnumber=7300793
[66] J. Stubbs, W. Moreira, and R. Dooley, "Distributed systems of microservices using docker and serfnode," ser. Science Gateways (IWSG), 2015 7th International Workshop on, pp. 34–39. [Online]. Available: http://ieeexplore.ieee.org.focus.lib.kth.se/xpl/mostRecentIssue.jsp?punumber=7217893
[67] G. Fichtner. (2016) Docker pirates armed with explosive stuff - roaming the seven seas in search for golden container plunder. Hypriot. [Accessed 16-Feb-2016]. [Online]. Available: http://blog.hypriot.com/
[68] G. Henderson. WiringPi GPIO interface library for the Raspberry Pi. [Accessed 10-May-2016]. [Online]. Available: http://wiringpi.com/
[69] Saleae. Saleae homepage. Saleae Inc. [Accessed 21-Mar-2016]. [Online]. Available: https://www.saleae.com/

A Work Division

Table 11 shows the work division and Table 12 shows the contribution to the report.

Description Sandra Elin Prestudy Docker Containers Update Related work System parameters Hardware Update Experiments Conduct Experiments Experiment Design Find levels for experiments Measurement tool setup Experiment analysis code Modelling Make state machine in Enterprise Architect Coding Communication Coding standard Threading Install OS and Docker Testing Update logic Docker containers and GPIO Code network setup Hardware Setup internet connections Make workstation Always Risk management Jira Git code repository setup Schedule and maintenance Git thesis repository setup and maintenance Stakeholder Tritech KTH management

Table 11: An overview of the work division. It shows the main contribution; smaller contributions from the other party may exist.

Sandra Elin Introduction Objectives Method Scope Sustainability Prestudy Docker Definitions Target Market and Taxonomy of Virtualization Parameters of Interest Linux Containers Remote Update Related Work Microservices Principle Docker Compatible Hardware Implementation Update Regimes with Containers Design guidelines Docker Overview of the Chosen Container Communication Updating Regime Actuator Signals Detailed Model Platform Specifics Development environment Container Implementation Hardware Specific Container Application Containers Experiment design Measurements Experiment Method Analysis Result Result Conclusion Conclusion Future Work Future Work

Table 12: An overview of the individual contributions to the report; sections not listed were collaborated on.

B Hardware List

Name Version CPU/core Architecture Bit Rel. Core year Parallella ARM Cortex-A9 ARMv7-A 32 13 Dual-core armStoneTM A5 v1 ARM Cortex-A5 ARMv7-A 32 13 Single-core, v2 ARM Cortex-M4 Dual-core armStoneTM A9 v1 ARM Cortex-A9 ARMv7-A 32 12 Quad-core, v2 Single-core 5250-AA ARM Cortex-A15 ARMv7-A 32 12 Dual-core Arndale Octa 5420 ARM Cortex-A15, ARMv7-A 32 13 Quad-core ARM Cortex-A7 Axiomtek CAPA841 Intel Atom E3827 x86-64 64 13 Dual-core, or E3845 Quad-core Banana Pi ARM Cortex-A7 ARMv7-A 32 14 Dual-core Banana Pi M2 orginal, + ARM Cortex-A7 ARMv7-A 32 15 Quad-Core Banana Pi M3 ARM Cortex-A7 ARMv7-A 32 15 Octa-Core Banana Pro ”” ARM Cortex-A7 ARMv7-A 32 14 Dual-core Beagle board x15 ARM Cortex-A15 ARMv7-A, 16 Dual-core and ARM M4 ARMv7E-M BeagleBoard-xM C2 ARM Cortex-A8 ARMv7-A 32 10 Single-core BeagleBone A6A ARM Cortex-A8 ARMv7-A 32 11 Single-core BeagleBone Black C ARM Cortex-A8 ARMv7-A 32 14 Single-core Connected i.MX6 ARM Cortex-A9 ARMv7-A 32 Cosmic+ Board ARM Cortex-A5 ARMv7-A, 32 13 Single-core ARM Cortex-M4 ARMv7E-M Cubieboard ARM Cortex-A8 ARMv7-A 32 12 Single-core Cubieboard 2 ARM Cortex-A7 ARMv7-A 32 13 Dual-core Cubieboard 3 ARM Cortex-A7 ARMv7-A 32 13 Dual-core Cubieboard 4 / CC-A80 Octo ARM Cortex-A15 ARMv7-A 32 14 Quad-core ARM Cortex-A7 Dragonboard 410c ARM Cortex-A53 ARMv8-A 64 15 Quad-core Embest SBC8600B ARM Cortex-A8 ARMv7-A 32 13 Single-core Firefly-RK3288 RK3288, ARM Cortex-A17 ARMv7-A 32 15 Quad-core Reload Forlinx OK210, ARM Cortex-A8 ARMv7-A 32 13 OK210-A Forlinx FL2416, ARM926EJ ARM 32 FL2440, TE2440 Forlinx OK6410 ARM 11 ARMv6 32 Forlinx I.MX6UL ARM Cortex-A7 ARMv7-A 32 Forlinx I.MX6DL, ARM Cortex-A9 ARMv7-A 32 I.MX6Q Gizmo Board x86-64 Bobcat x86-64 64 13 Gizmo Board 2 x86 AMD Embedded IA-32 32 14 Dual-core G-Series GoWarrior TIGER ARM Cortex-A9 ARMv7-A 32 15 Dual-core Hackberry A10 ARM Cortex-A8 ARMv7-A 32 12 Single-core HiKey Rev A1 ARM Cortex-A53 ARMv8-A 64 15 HummingBoard i1, ARM Cortex-A9 ARMv7-A 32 14 Single-core, i2, Dual-core lite, i2eX 
Dual-core Inforce 6410 Krait ARMv7-A, 32 13 Thumb-2 Inforce 6410plus Krait ARMv7-A, 32 15 Thumb-2 Inforce 6540 Krait 450 ARMv7-A 32 14 Intel Edison Kit v2 Intel Atom IA-32 32 14 Dual-core Intel Galileo Gen 2 x86 Quark IA-32 32 13 Single-core Inventami Entry ARM Cortex-A9 ARMv7-A 32 15 Dual-core Inventami Full ARM Cortex-A9 ARMv7-A 32 15 Quad-core MarsBoard A10 New ARM Cortex-A8 ARMv7-A 32 13 MarsBoard A20 New ARM Cortex-A7 ARMv7-A 32 13 Dual-core MarsBoard RK3066 ARM Cortex-A9 ARMv7-A 32 14 Dual-core MinnowBoard Intem Atom IA-32 32 13 Single-core x86 Bonnell MiraBox ARMADA 370 ARMv7 32 14 MK802 II ? ARM Cortex-A8 ARMv7-A 32 12 MK808 ? ARM Cortex-A9 ARMv7-A 32 12 MTB025 ? ARM Cortex-A8 ARMv7-A 32 13 MYIR MYD-AM335X ARM Cortex-A8 ARMv7-A 32 13 Single-core NanoPC-T1 ARM Cortex-A9 ARMv7-A 32 14 Quad-Core NanoPi 2 ARM Cortex-A9 ARMv7-A 32 15 Quad-Core Nitrogen6x Rev 3 ARM Cortex-A9 ARMv7-A 32 13 Novena ARM Cortex-A9 ARMv7-A 32 14 TK1 ARM Cortex-A15 ARMv7-A 32 14

Table 13: Overview of SBCs for implementations within the scope (A–N). The list is incomplete and no accuracy is claimed.

III ODROID-C1 ARM Cortex-A5 ARMv7-A 32 14 Quad-core ODROID-C1+ ARM Cortex-A5 ARMv7-A 32 15 Quad-core ODROID-C2 ARM Cortex-A53 ARMv8-A 64 16 Quad-core ODROID-U3 ARM Cortex-A9 ARMv7-A 32 14 ODROID-XU ARM Cortex-A15 ARMv7-A 32 13 ARM Cortex-A7 ODROID-XU3 ARM Cortex-A15 ARMv7-A 32 14 ARM Cortex-A7 ODROID-XU3 Lite ARM Cortex-A15 ARMv7-A 32 15 ARM Cortex-A7 ODROID-XU4 ARM Cortex-A15 ARMv7-A 32 15 Octa-core ARM Cortex-A7 OLinuXino A10 LIME ARM Cortex-A8 ARMv7-A 32 13 OLinuXino A13 WIFI ARM Cortex-A8 ARMv7-A 32 12 OLinuXino A20 LIME, ARM Cortex-A7 ARMv7-A 32 13 MICRO OLinuXino A20 LIME2 ARM Cortex-A7 ARMv7-A 32 14 Orange Pi ARM Cortex-A7 ARMv7-A 32 15 Orange Pi 2 ARM Cortex-A7 ARMv7-A 32 15 Quad-core Orange Pi Lite ARM Cortex-A7 ARMv7-A 32 16 Orange Pi Mini ARM Cortex-A7 ARMv7-A 32 15 Dual-core Orange Pi Mini 2 ARM Cortex-A7 ARMv7-A 32 15 Quad-core Orange Pi One ARM Cortex-A7 ARMv7-A 32 16 Quad-core Orange Pi PC ARM Cortex-A7 ARMv7-A 32 15 Quad-core Orange Pi Plus ARM Cortex-A7 ARMv7-A 32 15 Quad-core Orange Pi Plus 2 ARM Cortex-A7 ARMv7-A 32 15 Quad-core Orion R28 Pro ARM Cortex-A17 ARMv7-A 32 14 PandaBoard ES Rev. 3 ARM Cortex-A9 ARMv7-A 32 11 pcDuino Lite ARM Cortex-A8 ARMv7-A 32 13 Single-core pcDuino v2 ARM Cortex-A8 ARMv7-A 32 13 Single-core pcDuino3 ARM Cortex-A7 ARMv7-A 32 14 Dual-core pcDuino3Nano ARM Cortex-A7 ARMv7-A 32 14 phyBOARD-Mira Solo, ARM Cortex-A9 ARMv7-A 32 14 Single-core Quad Quad-core phyBOARD-Wega 5V, 12-24V ARM Cortex-A8 ARMv7-A 32 13 PINE A64 ARM Cortex-A53 ARMv8-A 64 15 Quad-core PINE A64+ ARM Cortex-A53 ARMv8-A 64 15 Quad-core Radxa Rock base, Lite ARM Cortex-A9 ARMv7-A 32 14 Raspberry Pi B, B+ ARM11 ARMv6 32 12,14 Single-core Raspberry Pi 2 Gen2 B ARM Cortex-A7 ARMv7-A 32 15 Quad-core Raspberry Pi 3 B ARM Cortex-A53 ARMv8-A 64 16 Quad-core RIoTboard ? 
ARM Cortex-A9 ARMv7-A 32 14 SBC-iBT Single, Baytrail x86-64 64 Single-core, Dual, Dual-core, Quad Quad-core SBC-iGT G Series x86-64 64 Dual-core SBC-ISB x86 Intel Ivy IA-32 32 Single-core Bridge Core-i7 Snowball SKY-S9500 ARM Cortex-A9 ARMv7-A 32 11 Supermicro E100-8Q x86 Quark IA-32 32 14 Single-core TBS 2910 Matrix ARM Cortex-A9 ARMv7-A 32 14 Quad-core Tronsmart Draco Octo Meta, ARM Cortex-A15 ARMv7-A 32 Quad-core Octo Telos ARM Cortex-A7 UDOO Dual, ARM Cortex-A9 ARMv7-A 32 13 Dual-core, Quad Quad-core Ventana GW5510 Femto Single, ARM Cortex-A9 ARMv7-A 32 14 Single-core, Dual, Dual-core, Quad Quad-core VersaLogic Iguana EPIC-25 Intel Atom D525 x86-64 64 Single-core, Dual-core or D425 VersaLogic Newt EPIC-17 DMP Vortex86DX IA-32 32 Single-core VIA APC 8750 / Rock ARM1176JZF ARMv6 32 12 VIA EPIA P910 x86-64 VIA Eden X4 x86-64 64 12 Single-core VIA VAB-600 ARM Cortex-A9 ARMv7-A 32 13 Wandboard Solo, ARM Cortex-A9 ARMv7-A 32 13 Single-core, Dual, Dual-core, Quad Quad-core

Table 14: Overview of SBCs for implementations within the scope (O–W). The list is incomplete and no accuracy is claimed.

Tables 13 and 14 list hardware that fits within the scope of this thesis and follows the requirements stated in 2.4.3 Docker Compatible Hardware. The information was gathered by browsing the Internet, looking at sites that compile and compare SBC lists, at SBC vendors, and at the manufacturers' homepages. There are many more boards than the ones listed, and the information, as is the nature of data taken from the Internet, might not be accurate. It is nevertheless included as it may show trends.

C Model

D Code

D.1 Docker Files

This Dockerfile is taken from https://github.com/acencini/rpi-python-serial-wiringpi.git. The container is run in privileged mode like: docker run --device /dev/mem:/dev/mem --device /dev/ttyAMA0:/dev/ttyAMA0 --privileged --name=wiringpi -ti <image name> /bin/bash

# Pull base image
FROM resin/rpi-raspbian:jessie
MAINTAINER Andrew Cencini
# edited by Elin MK Nordmark

# Proxy settings:
ENV http_proxy="http://emkno:xxx@proxy.tritech.se:8080/"
ENV https_proxy="https://emkno:xxx@proxy.tritech.se:8080/"

# Install dependencies
RUN apt-get update && apt-get install -y \
    git-core \
    build-essential \
    gcc \
    python \
    python-dev \
    python-pip \
    python-virtualenv \
    vim \
    --no-install-recommends && \
    rm -rf /var/lib/apt/lists/*
RUN pip install pyserial
RUN git clone git://git.drogon.net/wiringPi
RUN cd wiringPi && ./build
RUN pip install wiringpi2

# Define working directory
WORKDIR /data
VOLUME /data

# Copy over compiled data
COPY GPIOContainer /data
COPY gpiosetup.sh /data
COPY pwmsetup.sh /data

# Define default command
CMD ["bash"]

D.2 GPIO Container

//////////////////////////////////////////////////////////////////////////////////
// GPIOContainer.c
// Written by Elin MK Nordmark and Sandra Aidanpaa 16 May 2016
//
// The client functions in this code are based on source code found in
// "Beej's Guide to Network Programming", written by Brian "Beej Jorgensen" Hall.
//
/////////////////////////////////////////////////////////////////////////////////

/*
This is the code of the privileged container in the updating process.
It has application functions and a client.
*/

// Header names were not preserved in typesetting; these are the ones the code requires.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <errno.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netdb.h>
#include <arpa/inet.h>
#include <wiringPi.h>

#define DEBUG 1          // Debugging variable
#define COMMUNICATION 1  // Debugging variable
#define true 1
#define false 0
#define PORT "3487"      // the port client will be connecting to
#define MAXDATASIZE 100  // max number of bytes we can get at once
typedef int bool;

// The setup function: needs only to be run once. Calls on three bash files for the GPIO
// functionality. Takes no argument.
void setup() {
    wiringPiSetup();
    system("chmod +x gpiosetup.sh");
    system("chmod +x pwmsetup.sh");
    system("chmod +x setPWM.sh");
    system("./gpiosetup.sh");
    system("./pwmsetup.sh");
    printf("gpio setup \n");
    pinMode(7, OUTPUT);
    digitalWrite(7, 0);
}

// Function to change the duty cycle of the PWM, calls on a bash script. Takes a string
// with a number to set the duty cycle and returns 0.
int setPWM(char duty[]) {
    char gpiobash[50];
    sprintf(gpiobash, "./setPWM.sh %s", duty);
    system(gpiobash);
    return 0;
}

// Function to make an array of integers out of a string. Takes a string (string)
// with values to transform, an empty array (array) to put the values in,
// and the size of the array (size). Returns 0.
int str2array(char *string, int *array, int size) {
    char *token;
    int i = 0;
    token = strtok(string, ","); // Delimiter is a comma, change if other delimiter
    while (token != NULL) {
        array[i] = atoi(token);
        i++;
        token = strtok(NULL, ","); // Delimiter is a comma, change if other delimiter
    }
    if (DEBUG == 2) {
        printf("str2array: array values: ");
        for (i = 0; i < size; i++) {
            printf("%d ", array[i]);
        }
        printf("\n");
    }
    return 0;
}

// Function to make a string out of an array of integers. Takes an array (array)
// with values to transform, an empty string (string) to put the values in,
// and the size of the array (size). Returns 0.
int array2str(int *array, char *string, int size) {
    int i;
    char buf[16];
    snprintf(string, 50, "%d", array[0]);
    if (size > 1) {
        for (i = 1; i < size; ++i) {
            snprintf(buf, 16, "%d", array[i]);
            strcat(string, ",");
            strcat(string, buf);
            memset(buf, 0, strlen(buf));
        }
    }
    return 0;
}

// Function to set pins 2 through 6 to high or low. Takes a string of pins to set (pin_str)
// and a string of values to set the corresponding pins to (val_str). Returns 0.
int setGPIO(char pin_str[], char val_str[]) {
    int i, outpin;
    int pin_array[20] = {0};
    int val_array[20] = {0};
    str2array(pin_str, pin_array, 15);
    str2array(val_str, val_array, 15);

    if (DEBUG == 2) {
        printf("str2array: pin array values: ");
        for (i = 0; i < 15; i++) {
            printf("%d ", pin_array[i]);
        }
        printf("\n");
        printf("str2array: val array values: ");
        for (i = 0; i < 15; i++) {
            printf("%d ", val_array[i]);
        }
        printf("\n");
    }

    for (i = 0; i < 20; i++) {
        outpin = pin_array[i];
        if ((outpin > 1) && (outpin < 7)) { // legal pin values
            if ((val_array[i] == 1) || (val_array[i] == 0)) {
                pinMode(outpin, OUTPUT);
                digitalWrite(outpin, val_array[i]);
                if (DEBUG == 1) {
                    printf(" Set pin %d to %d\n", outpin, val_array[i]);
                }
            } else {
                if (DEBUG == 3) {
                    printf(" Illegal value for pin %d \n", outpin);
                }
            }
        } else {
            if (DEBUG == 3) {
                printf(" Illegal pin %d \n", outpin);
            }
        }
    }
    return 0;
}

// Function that reads the pins 21 through 29. Takes an empty string (inpin_str) and
// sets the corresponding values in it. Returns 0.
int readSensorValues(char *inpin_str) {
    int i;
    int inpins[9] = {21, 22, 23, 24, 25, 26, 27, 28, 29};
    int inpin_values[9] = {0};
    for (i = 0; i < 9; ++i) {
        inpin_values[i] = digitalRead(inpins[i]);
        if (DEBUG == 2) {
            printf(" Pin %d is %d\n", inpins[i], inpin_values[i]);
        }
    }
    array2str(inpin_values, inpin_str, 9);
    if (DEBUG == 1) {
        printf(" readSensorValues inpin_str= %s \n", inpin_str);
    }
    return 0;
}

// Function that splits the incoming message string (in) and saves the parts in strings for
// IP, ready_access, new_IP, duty, out_pins, out_values. Returns 0.
int msgSplit(char *in, char *IP, char *ready_access, char *new_IP, char *duty, char *out_pins, char *out_values) {
    strcpy(IP, strtok(in, "/"));
    strcpy(ready_access, strtok(NULL, "/"));
    strcpy(new_IP, strtok(NULL, "/"));
    strcpy(duty, strtok(NULL, "/"));
    strcpy(out_pins, strtok(NULL, "/"));
    strcpy(out_values, strtok(NULL, "/"));

    if (DEBUG == 2) {
        printf(" IP: %s \n", IP);
        printf(" ready_access: %s \n", ready_access);
        printf(" new_IP: %s \n", new_IP);
        printf(" duty: %s \n", duty);
        printf(" out_pins: %s \n", out_pins);
        printf(" out_values: %s \n", out_values);
    }

    return 0;
}

// Function that unifies the outgoing message (out) from strings: error, pin_in and die.
int msgUnify(char *out, char *error, char *pin_in, char *die) {
    snprintf(out, 50, "%s/%s/%s", error, pin_in, die);
    if (DEBUG == 1) {
        printf(" msgUnify: out= %s \n", out);
    }
    return 0;
}

// Function that turns strings of "1" or "0" into booleans. Takes a string (string)
// and returns a boolean value (value).
bool str2bool(char *string) {
    bool value;
    value = atoi(string);
    return value;
}

// Function that turns booleans into strings of "1" or "0". Takes a boolean value (boolval),
// and saves it into a string (string).
int bool2str(int boolval, char *string) {
    snprintf(string, 50, "%d", boolval);
    if (DEBUG == 2) {
        printf("bool2str: string= %s \n", string);
    }
    return 0;
}

// The application of the container, calls functions:
// setPWM, setGPIO, readSensorValues. Takes strings:
// duty, out_pins, out_values and a string to put the in-pin values in (inval_str).
int application(char *duty, char *out_pins, char *out_values, char *inval_str) {
    printf("Application: \n");
    setPWM(duty);
    setGPIO(out_pins, out_values);
    readSensorValues(inval_str);
    return 0;
}

// get sockaddr, IPv4 or IPv6:
void *get_in_addr(struct sockaddr *sa)
{
    if (sa->sa_family == AF_INET) {
        return &(((struct sockaddr_in *)sa)->sin_addr);
    }

    return &(((struct sockaddr_in6 *)sa)->sin6_addr);
}

int clientReceive(char *msg_receive, char *this_IP) {

    if (COMMUNICATION == 1) {
        int sockfd, numbytes;
        char buf[MAXDATASIZE];
        struct addrinfo hints, *servinfo, *p;
        int rv;
        char s[INET6_ADDRSTRLEN];

        memset(&hints, 0, sizeof hints);
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;

        if ((rv = getaddrinfo(this_IP, PORT, &hints, &servinfo)) != 0) {
            fprintf(stderr, "getaddrinfo: %s \n", gai_strerror(rv));
            return 1;
        }

        for (p = servinfo; p != NULL; p = p->ai_next) {
            if ((sockfd = socket(p->ai_family, p->ai_socktype,
                    p->ai_protocol)) == -1) {
                perror("client: socket");
                continue;
            }

            if (connect(sockfd, p->ai_addr, p->ai_addrlen) == -1) {
                close(sockfd);
                perror("client: connect");
                continue;
            }

            break;
        }

        if (p == NULL) {
            fprintf(stderr, "client: failed to connect \n");
            return 2;
        }

        inet_ntop(p->ai_family, get_in_addr((struct sockaddr *)p->ai_addr), s, sizeof s);
        printf("client: connecting to %s \n", s);

        freeaddrinfo(servinfo);

        if ((numbytes = recv(sockfd, buf, MAXDATASIZE - 1, 0)) == -1) {
            perror("recv");
            exit(1);
        }

        buf[numbytes] = '\0';

        strcpy(msg_receive, buf);

        printf("client: received '%s'\n", msg_receive);

        close(sockfd);
    }
    return 0;
}

int clientSend(char *msg_send, char *that_IP) {
    if (COMMUNICATION == 1) {
        int sockfd, numbytes;
        char buf[MAXDATASIZE];
        struct addrinfo hints, *servinfo, *p;
        int rv;
        char s[INET6_ADDRSTRLEN];
        char *message;

        memset(&hints, 0, sizeof hints);
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;

        if ((rv = getaddrinfo(that_IP, PORT, &hints, &servinfo)) != 0) {
            fprintf(stderr, "getaddrinfo: %s \n", gai_strerror(rv));
            return 1;
        }

        for (p = servinfo; p != NULL; p = p->ai_next) {
            if ((sockfd = socket(p->ai_family, p->ai_socktype,
                    p->ai_protocol)) == -1) {
                perror("client: socket");
                continue;
            }

            if (connect(sockfd, p->ai_addr, p->ai_addrlen) == -1) {
                close(sockfd);
                perror("client: connect");
                continue;
            }

            break;
        }

        if (p == NULL) {
            fprintf(stderr, "client: failed to connect \n");
            return 2;
        }

        inet_ntop(p->ai_family, get_in_addr((struct sockaddr *)p->ai_addr),
                s, sizeof s);
        printf("client: connecting to %s \n", s);

        freeaddrinfo(servinfo);

        if (send(sockfd, msg_send, strlen(msg_send), 0) < 0) {
            printf("send failed");
            return 1;
        }
        printf("client: sent '%s'\n", msg_send);

        close(sockfd);
    }
    return 0;
}

// The main function, if COMMUNICATION is set to 0/false, no communication is made.
int main() {
    setup();
    // msg_in="IP/ready_access/new_IP/duty/out_pins/out_values".
    char new_msg_in[100], msg_in[100], msg_out[100], die[2], inval_str[18];
    char ready_access[2], new_IP[20], duty[3], out_pins[12], out_values[12];
    char new_ready_access[2], new_new_IP[20], new_duty[3], new_out_pins[12], new_out_values[12];
    bool die_bool = false, ready_access_bool, new_ready_access_bool = false;
    char buffer1[99], buffer2[99];

    char old_IP[20] = "172.18.0.5";
    char error[] = "None";
    // bool ready_access = false;

    while (1) {
        // receive messages from containers
        printf("Trying to receive \n");
        clientReceive(msg_in, old_IP);
        printf("msg_in %s \n", msg_in);

        strcpy(buffer1, msg_in);
        msgSplit(buffer1, old_IP, ready_access, new_IP, duty, out_pins, out_values);

        // ready_access_bool = str2bool(ready_access);

        if (new_IP[0] != 48) { // (new_IP != {0})
            digitalWrite(7, 1);
            printf("There is a new IP: %s \n", new_IP);
            clientReceive(new_msg_in, new_IP);
            strcpy(buffer2, new_msg_in);
            msgSplit(buffer2, new_IP, new_ready_access, new_new_IP, new_duty, new_out_pins, new_out_values);
            new_ready_access_bool = str2bool(new_ready_access);
        }

        if ((new_ready_access_bool == true) && (new_IP[0] != 48)) {
            printf("The new IP is ready to access: %d\n", new_ready_access_bool);
            application(new_duty, new_out_pins, new_out_values, inval_str);
        } else {
            printf("Running application: \n");
            application(duty, out_pins, out_values, inval_str);
        }

        // send message to containers
        if (new_IP[0] != 48) { // (new_IP != {0})
            bool2str(die_bool, die);
            msgUnify(msg_out, error, inval_str, die);
            printf("Sending to new IP: %s \n", msg_out);
            clientSend(msg_out, new_IP);
        }

        if (new_ready_access_bool == true) {
            die_bool = true;
            bool2str(die_bool, die);
            msgUnify(msg_out, error, inval_str, die);
            printf("Sending die to old IP: %s \n", msg_out);
            clientSend(msg_out, old_IP);
            digitalWrite(7, 0);
            die_bool = false;
            strcpy(old_IP, new_IP);
            printf("The new IP is set as standard IP: %s \n", old_IP);
            new_ready_access_bool = false;
        }
        bool2str(die_bool, die);
        msgUnify(msg_out, error, inval_str, die);
        printf("Sending: %s \n", msg_out);
        clientSend(msg_out, old_IP);

        int p = 0;
        if (COMMUNICATION == 0) {
            for (p = 0; p < 80000000; ++p);
        }
    } // end while
    return 0;
} // end main

gpiosetup.sh:

#!/bin/bash

for PIN in 21 22 23 24 25 26 27 28 29
do
    gpio mode $PIN up
    # Pullup on $PIN
done

pwmsetup.sh:

#!/bin/bash

if [ -z "$1" ]; then
    # echo using default value: 48
    DUTY=48
fi

if [ -n "$1" ]; then
    DUTY=$1
    # echo "Duty is set to: "
    # echo $1
    if [ "$DUTY" -lt 48 ]; then
        DUTY=48
        echo "pwmsetup: duty of $1 is too low, set to minimum: 48"
    fi
    if [ "$DUTY" -gt 96 ]; then
        DUTY=96
        echo "pwmsetup: duty of $1 is too high, set to maximum: 96"
    fi
fi

gpio mode 1 pwm
gpio pwm-ms
gpio pwmc 400
gpio pwmr 1000
gpio pwm 1 $DUTY

setPWM.sh:

#!/bin/bash

if [ -z "$1" ]; then
    # echo using default value: 48
    DUTY=48
fi

if [ -n "$1" ]; then
    DUTY=$1
    # echo "Duty is set to: "
    # echo $1
    if [ "$DUTY" -lt 48 ]; then
        DUTY=48
        echo "pwmsetup: duty of $1 is too low, set to minimum: 48"
    fi
    if [ "$DUTY" -gt 96 ]; then
        DUTY=96
        echo "pwmsetup: duty of $1 is too high, set to maximum: 96"
    fi
fi

gpio pwm 1 $DUTY

D.3 Application Containers

1 ////////////////////////////////////////////////////////////////////////////////// 2 // old . c 3 // Written by Elin MK Nordmark and Sandra Aidanpaa 16 May 2016 4 // 5 // The client and server functions in this code are based on source code found in 6 // ”Beej’s Guide to Network Programming”, written by Brian ”Beej Jorgensen” Hall. 7 // 8 ///////////////////////////////////////////////////////////////////////////////// 9 10 #i n c l u d e 11 #i n c l u d e 12 #i n c l u d e 13 #i n c l u d e 14 #i n c l u d e 15 #i n c l u d e 16 #i n c l u d e 17 #i n c l u d e 18 #i n c l u d e 19 #i n c l u d e 20 #i n c l u d e 21 #i n c l u d e 22 #i n c l u d e 23 #i n c l u d e 24 25 #d e f i n e DEBUG 2 26 #define true 1 27 #define false 0 28 #define PORT1 ”3487” // the port GPIO will be connecting to 29 #define PORT2 ”3032” // the port NEW will be connecting to 30 #define BACKLOG 10 // how many pending connections queue will hold 31 #define MAXDATASIZE 100 // max number of bytes we can get at once 32 #define COMMUNICATION 1 // Set to 1 for communication 33 34 typedef int bool; 35 bool update = false; 36 char msg out[100]=”0/0/0/0/0/0”, msg in[100]=”0/0/0”, containerData in[100]=”0/0/0”, containerData out[100]=”0/0/0”; 37 char ∗ o l d I P = NULL; 38 char new IP [ ] = ” 0 ” ; 39 40 // This is the code for the container named Old, the code for container New 41 // is exactly the same except for the values sent. 42 43 44 void ∗ stressTest(void ∗ arg ) { 45 46 char ∗ t e s t s t r ; 47 t e s t s t r = ( char ∗) arg ; 48 p r i n t f (”% s \n ” , t e s t s t r ) ; 49 50 system(”stress −−cpu 10 −−i o 4 −−vm 4 −−vm−bytes 128M −t 60 s ”) ; 51 } 52 53 54 // get sockaddr, IPv4 or IPv6: 55 void ∗ g e t i n addr(struct sockaddr ∗ sa ) { 56 i f ( sa−>s a f a m i l y == AF INET) { 57 return &(((struct sockaddr i n ∗) sa )−>s i n a d d r ) ; 58 } 59 60 return &(((struct sockaddr i n 6 ∗) sa )−>s i n 6 a d d r ) ; 61 } 62 63 64 void ∗ updateServer(void ∗ arg ) { 65

XXIV 66 char ∗ t e s t s t r ; 67 t e s t s t r = ( char ∗) arg ; 68 p r i n t f (”% s \n ” , t e s t s t r ) ; 69 70 int sockfd, new fd, numbytes; // listen on sock fd , new connection on new fd 71 struct addrinfo hints , ∗ s e r v i n f o , ∗p ; 72 struct sockaddr storage their addr; // connector ’s address information 73 struct sockaddr i n some addr ; 74 s o c k l e n t s i n s i z e ; 75 i n t yes =1; 76 char s [INET6 ADDRSTRLEN ] ; 77 i n t rv ; 78 79 memset(&hints , 0, sizeof hints); 80 h i n t s . a i f a m i l y = AF UNSPEC; 81 h i n t s . a i socktype = SOCK STREAM; 82 h i n t s . a i f l a g s = AI PASSIVE; // use my IP 83 84 if ((rv = getaddrinfo(NULL, PORT2, &hints , &servinfo)) != 0) { 85 fprintf(stderr , ”getaddrinfo: %s \n ” , g a i strerror(rv)); 86 e x i t ( 1 ) ; 87 } 88 89 for(p = servinfo; p != NULL; p = p−>a i n e x t ) { 90 if ((sockfd = socket(p−>a i f a m i l y , p−>a i s o c k t y p e , 91 p−>a i protocol)) == −1) { 92 perror(”server: socket”); 93 continue ; 94 } 95 if (setsockopt(sockfd , SOL SOCKET, SO REUSEADDR, &yes , 96 sizeof(int)) == −1) { 97 perror(”setsockopt”); 98 e x i t ( 1 ) ; 99 } 100 if (bind(sockfd, p−>ai addr , p−>a i a d d r l e n ) == −1) { 101 close(sockfd); 102 perror(”server: bind”); 103 continue ; 104 } 105 break ; 106 } 107 freeaddrinfo(servinfo); 108 109 i f ( p == NULL) { 110 fprintf(stderr , ”server: failed to bind \n ”) ; 111 e x i t ( 1 ) ; 112 } 113 if (listen(sockfd , BACKLOG) == −1) { 114 perror(”listen”); 115 e x i t ( 1 ) ; 116 } 117 118 119 printf(”updateserver: waiting for connections... \ n ”) ; 120 121 char update ok [ 2 0 0 0 ] ; 122 char ∗ update check ; 123 update check = ”update”; 124 125 while ( 1 ) { 126 127 s i n size = sizeof their a d d r ; 128 new fd = accept(sockfd , (struct sockaddr ∗)&t h e i r a d d r , &s i n s i z e ) ; 129 i f ( new fd == −1) { 130 perror(”accept”); 131 continue ; 132 } 133

XXV 134 i n e t n t o p ( t h e i r a d d r . s s f a m i l y , 135 g e t i n addr((struct sockaddr ∗)&t h e i r addr),s, sizeof s); 136 137 printf(”updateserver: got connection from %s on port %d\n”, s, htons(some addr . s i n p o r t ) ) ; 138 139 if ((numbytes = recv(new fd , update ok, 2000, 0)) == −1){ 140 perror(”recv”); 141 e x i t ( 1 ) ; 142 } 143 update ok[numbytes] = ’ \ 0 ’ ; 144 printf(”updateserver receives: %s \n ” , update ok ) ; 145 146 if (strcmp(update check , update ok ) == 0) { 147 if (strcmp(new IP, ”0”)==0){ 148 s t r c p y ( new IP , s ) ; 149 } 150 } 151 printf(”updateServer receives New IP: %s \n ” , new IP); 152 153 if (send(new fd, containerData out , strlen(containerData out ) , 0) == −1){ 154 perror(”send”); 155 } 156 157 } 158 close(sockfd); 159 r e t u r n 0 ; 160 } 161 162 int updateClient(char ∗IP[]) { 163 164 int sockfd , numbytes; 165 struct addrinfo hints , ∗ s e r v i n f o , ∗p ; 166 i n t rv ; 167 char s [INET6 ADDRSTRLEN ] ; 168 char ∗ message ; 169 170 memset(&hints , 0, sizeof hints); 171 h i n t s . a i f a m i l y = AF UNSPEC; 172 h i n t s . a i socktype = SOCK STREAM; 173 174 if ((rv = getaddrinfo(IP[0], PORT2, &hints , &servinfo)) != 0) { 175 fprintf(stderr , ”getaddrinfo: %s \n ” , g a i strerror(rv)); 176 r e t u r n 1 ; 177 } 178 179 for(p = servinfo; p != NULL; p = p−>a i n e x t ) { 180 if ((sockfd = socket(p−>a i f a m i l y , p−>a i s o c k t y p e , 181 p−>a i protocol)) == −1) { 182 perror(”client: socket”); 183 continue ; 184 } 185 186 if (connect(sockfd, p−>ai addr , p−>a i a d d r l e n ) == −1) { 187 close(sockfd); 188 perror(”client: connect”); 189 continue ; 190 } 191 192 break ; 193 } 194 195 i f ( p == NULL) { 196 fprintf(stderr , ”client: failed to connect \n ”) ; 197 r e t u r n 2 ; 198 } 199 200 i n e t n t o p (p−>a i f a m i l y , g e t i n addr((struct sockaddr ∗)p−>a i addr),s, sizeof s);

XXVI 201 printf(”client: connecting to %s \n ” , s ) ; 202 203 freeaddrinfo(servinfo); 204 205 message = ”update”; 206 if (send(sockfd , message, strlen(message), 0) < 0) { 207 printf(”send failed”); 208 r e t u r n 1 ; 209 } 210 211 if ((numbytes = recv(sockfd , containerData in, 100, 0)) == −1) { 212 perror(”recv”); 213 e x i t ( 1 ) ; 214 } 215 216 // numbytes is number of bytes read into the buffer, which is void ∗ buf 217 containerData in[numbytes] = ’ \ 0 ’ ; 218 219 printf(”client: received ’%s’ \ n”, containerData i n ) ; 220 221 222 close(sockfd); 223 224 r e t u r n 0 ; 225 } 226 227 void ∗ server(void ∗ arg ) { 228 229 char ∗ t e s t s t r ; 230 t e s t s t r = ( char ∗) arg ; 231 p r i n t f (”% s \n ” , t e s t s t r ) ; 232 233 int sockfd, new fd, numbytes; // listen on sock fd , new connection on new fd 234 struct addrinfo hints , ∗ s e r v i n f o , ∗p ; 235 struct sockaddr storage their addr; // connector ’s address information 236 struct sockaddr i n some addr ; 237 s o c k l e n t s i n s i z e ; 238 i n t yes =1; 239 char s [INET6 ADDRSTRLEN ] ; 240 i n t rv ; 241 242 memset(&hints , 0, sizeof hints); 243 h i n t s . a i f a m i l y = AF UNSPEC; 244 h i n t s . a i socktype = SOCK STREAM; 245 h i n t s . a i f l a g s = AI PASSIVE; // use my IP 246 247 if ((rv = getaddrinfo(NULL, PORT1, &hints , &servinfo)) != 0) { 248 fprintf(stderr , ”getaddrinfo: %s \n ” , g a i strerror(rv)); 249 e x i t ( 1 ) ; 250 } 251 252 for(p = servinfo; p != NULL; p = p−>a i n e x t ) { 253 if ((sockfd = socket(p−>a i f a m i l y , p−>a i s o c k t y p e , 254 p−>a i protocol)) == −1) { 255 perror(”server: socket”); 256 continue ; 257 } 258 if (setsockopt(sockfd , SOL SOCKET, SO REUSEADDR, &yes , 259 sizeof(int)) == −1) { 260 perror(”setsockopt”); 261 e x i t ( 1 ) ; 262 } 263 if (bind(sockfd, p−>ai addr , p−>a i a d d r l e n ) == −1) { 264 close(sockfd); 265 perror(”server: bind”); 266 continue ; 267 } 268 break ;

    }
    freeaddrinfo(servinfo);

    if (p == NULL) {
        fprintf(stderr, "server: failed to bind\n");
        exit(1);
    }
    if (listen(sockfd, BACKLOG) == -1) {
        perror("listen");
        exit(1);
    }

    printf("server: waiting for connections...\n");

    char buffer[1024];

    while (1) {

        sin_size = sizeof their_addr;
        new_fd = accept(sockfd, (struct sockaddr *)&their_addr, &sin_size);
        if (new_fd == -1) {
            perror("accept");
            continue;
        }

        inet_ntop(their_addr.ss_family,
            get_in_addr((struct sockaddr *)&their_addr), s, sizeof s);
        printf("server: got connection from %s on port %d\n", s, htons(some_addr.sin_port));

        if (send(new_fd, msg_out, strlen(msg_out), 0) == -1)
            perror("send");
        printf("The server sends: %s\n", msg_out);

        if ((numbytes = recv(new_fd, buffer, 1024, 0)) == -1) {
            perror("recv");
            exit(1);
        }

        buffer[numbytes] = '\0';
        if (numbytes > 0) {
            strcpy(msg_in, buffer);
        }
        printf("The server receives: %s\n", msg_in);
        printf("Server knows New IP is: %s\n", new_IP);
    }

    close(sockfd);
    return 0;
}


// Function to test the application of this container. Can be extended as needed.
bool test(char *ready_access) {
    if (DEBUG == 1) {
        printf("Testing...");
    }
    strcpy(ready_access, "1");
    return false;
}

// Function to make a string out of an array of integers. Takes an array (array)
// with values to transform, an empty string (string) to put the values in,
// and the size of the array (size). Returns 0.
int array2str(int *array, char *string, int size) {
    int i;
    char buf[16];
    snprintf(string, 50, "%d", array[0]);
    if (size > 1) {

        for (i = 1; i < size; ++i) {
            snprintf(buf, 16, "%d", array[i]);
            strcat(string, ",");
            strcat(string, buf);
            memset(buf, 0, strlen(buf));
        }
    }
    if (DEBUG == 1) {
        printf("string= %s\n", string);
    }
    return 0;
}

// Function that unifies the outgoing message (out) from strings:
// IP, ready_access, new_IP, duty, out_pins and out_values. Returns 0.
int msgUnify(char *out, char *IP, char *ready_access, /* char *new_IP, */ char *duty, char *out_pins, char *out_values) {
    strcpy(out, "");
    snprintf(out, 100, "%s/%s/%s/%s/%s/%s", IP, ready_access, new_IP, duty, out_pins, out_values);
    if (DEBUG == 1) {
        printf(" msgUnify: out= %s\n", out);
    }
    return 0;
}

// Function that unifies the outgoing message for updating (out) from strings:
// pin_in, IP_network and state. Returns 0.
int msgUnifyUpdate(char *out, char *pin_in, char *IP_network, char *state) {
    strcpy(out, "");
    snprintf(out, 100, "%s/%s/%s", pin_in, IP_network, state);
    if (DEBUG == 1) {
        printf(" msgUnifyUpdate: out= %s\n", out);
    }
    return 0;
}

// Function that splits the incoming message string for updating (in) and saves in strings:
// pin_in, IP_network and state. Returns 0.
int msgSplitUpdate(char *in, char *pin_in, char *IP_network, char *state) {
    strcpy(pin_in, strtok(in, "/"));
    strcpy(IP_network, strtok(NULL, "/"));
    strcpy(state, strtok(NULL, "/"));

    if (DEBUG == 1) {
        printf(" pin_in: %s\n", pin_in);
        printf(" IP_network: %s\n", IP_network);
        printf(" state: %s\n", state);
    }

    return 0;
}

// Function that splits the incoming message string (in) and saves in strings:
// errors, pin_in and die. Returns 0.
int msgSplit(char *in, char *errors, char *pin_in, char *die) {
    strcpy(errors, strtok(in, "/"));
    strcpy(pin_in, strtok(NULL, "/"));
    strcpy(die, strtok(NULL, "/"));

    if (DEBUG == 1) {
        printf(" errors: %s\n", errors);
        printf(" pin_in: %s\n", pin_in);
        printf(" die: %s\n", die);
    }

    return 0;

}

// The application of the container.
void *application(void *ready_access) {

    if (DEBUG == 2) {
        printf("ready_access: %s\n", (char *)ready_access);
    }
    int n, PWM_duty[4], GPIOwrite[12], GPIOwrite_vals[6], p;
    char die[2] = "0", errors[30] = "None", pin_in[18] = "0", IP_network[30] = "172.18.0.3", state[2] = "0";
    char buffer1[99], buffer2[99];
    bool die_bool = false, app_ok;

    char duty[] = "77";
    char out_pins[] = "2,55,4,3";
    char out_values[] = "1,0,0,1";
    char my_IP[] = "172.18.0.5";


    while (1) {

        printf("New round\n");
        // server2GPIO:
        msgUnify(msg_out, my_IP, ready_access, duty, out_pins, out_values); // also nr 4 new_IP
        printf("msg_out: %s\n", msg_out);

        // msg from GPIO:
        printf("In application msg_in is: %s\n", msg_in);
        strcpy(buffer1, msg_in);

        msgSplit(buffer1, errors, pin_in, die);
        if (DEBUG == 2) {
            printf("After split msg_in: %s\n", msg_in);
        }

        // updateServer2updateClient SEND
        msgUnifyUpdate(containerData_out, pin_in, IP_network, state);
        printf("containerData_out: %s\n", containerData_out);

        // updateClient2updateServer RECEIVE
        strcpy(buffer2, containerData_in);

        msgSplitUpdate(buffer2, pin_in, IP_network, state);
        if (DEBUG == 2) {
            printf("After split containerData_in: %s\n", containerData_in);
        }


        if (update == true) {
            update = test(ready_access);
        }

        if (COMMUNICATION == 0) {
            for (p = 0; p < 80000000; ++p);
        }
    } // end while
    return 0;
} // end application

// The main function; if COMMUNICATION is set to 0/false, no communication is made.
int main(int argc, char *argv[]) {

    char ready_access[2];
    const char *pth_msg0 = "Testing stress thread";
    const char *pth_msg1 = "Testing server thread";
    const char *pth_msg2 = "Testing updateServer thread";
    pthread_t pth_server, pth_updateserver, pth_app, pth_stress;

    if (argc == 1) {
        update = false;
        printf("update = %d\n", update);
        strcpy(ready_access, "1");
    } else {
        update = true;
        strcpy(ready_access, "0");
        printf("update = %d\n", update);
        old_IP = argv[1];
        printf("old_IP = %s\n", old_IP);
        if (COMMUNICATION == 1) {
            int update_res = updateClient(&old_IP);
        }
    }

    if (COMMUNICATION == 0) {
        application(ready_access);
    }
    if (COMMUNICATION == 1) {
        pthread_create(&pth_server, NULL, server, (void *)pth_msg1);
        pthread_create(&pth_updateserver, NULL, updateServer, (void *)pth_msg2);
        pthread_create(&pth_app, NULL, application, (void *)ready_access);
        pthread_create(&pth_stress, NULL, stressTest, (void *)pth_msg0);

        pthread_join(pth_server, NULL);
        pthread_join(pth_updateserver, NULL);
        pthread_join(pth_app, NULL);
        pthread_join(pth_stress, NULL);
    }
    return 0;
}

