Universal Integration of the Internet of Things through an IPv6-based Service Oriented Architecture enabling heterogeneous components interoperability

Grant agreement for: Collaborative project Grant agreement no.: 288445

Start date of project: October 1st, 2011 (36 months duration)
Deliverable: D3.1 Look-up/discovery, context-awareness, and resource/services directory
Contract Due Date: 30/09/2012

Submission Date 30/09/2012

Version 1.0

Responsible Partner University of Murcia

Author List: A. Jara, A. Skarmeta, P. López, D. Fernandez, S. Krco, B. Pokric, P. Martinez-Julia, R. Marin-Perez, M. Izquierdo
Dissemination level: PU

Keywords Internet of Things, IPv6, Service Discovery, Dissemination

Project Coordinator: Mandat International (MI) Sébastien Ziegler [email protected]

Abstract

This document presents a lightweight multicast DNS (lmDNS) for IPv6-enabled Smart Objects, and also presents a global discovery architecture interoperable with DNS called digcovery, which is accessible via www.digcovery.net.

The digcovery architecture presents how different technologies involved in the Internet of Things, such as Smart Objects, RFID tags, and legacy devices, are integrated into different digrectories. These digrectories are managed through DNS queries extended with an elastic-based search engine, in order to make the system scalable while also offering a centralized point, called the digcovery core, to manage and discover them.

All the resources and services are mapped to a common ontology and description based on existing ontologies (SSN) and profiles (IPSO), and compatible with DNS-SD types, in order to reach a common semantic description accessible through DNS.

This document also presents how to interoperate with the discovery architecture through interfaces other than DNS, such as JSON, RLUS, and the GSN M2M platform.

The platform can be used through DNS in order to exploit existing IP-based technologies, protocols and mechanisms, but this document also presents how to carry out look-ups and queries over digcovery with context awareness, based on location or resource types, using the proposed ElasticSearch architecture, which offers organized and context-based queries over a heterogeneous and distributed set of resources and services.

Finally, this document explains how to manage security and privacy through access control to the services and their associated attributes and resources.


Table of Contents

Executive summary ...... 8
General overview ...... 8
Summary of the proposed advantages ...... 9
Lightweight multicast DNS and Service Directory (lmDNS) Overview ...... 9
Scalable Domain handling architecture ...... 9
Integration on legacy and non-IPv6 devices ...... 10
Integration on EPCIS for RFID and Handle System for DOI ...... 10
Integration of digcovery with other middleware and data-focused platforms such as Global Sensor Networks (GSN) from OpenIoT ...... 10
1 Introduction ...... 12
1.1 Purpose and scope of the document ...... 12
1.2 Key components of the architecture proposal ...... 12
2 Literature review: Look-up/discovery, context awareness and resource repository solutions ...... 15
2.1 Local Service/Resource Directory ...... 16
2.1.1 DNS Service Directory and multicast DNS ...... 16
2.1.2 Resource Directory based on CoAP (RD) ...... 17
2.1.3 Resource Directory to DNS-SD and mDNS mapping ...... 17
2.1.4 Example of Resource Directory to DNS-SD and mDNS mapping ...... 18
2.1.5 DNS-SD/mDNS and CoAP Resource Directory/Discovery main differences ...... 22
2.2 Global look-up and discovery systems ...... 23
2.2.1 Simple look-up/resolution systems ...... 23
2.2.2 Overlay Network and Distributed Hash Table (DHT) ...... 25
2.2.3 Ontology-driven Semantic Systems ...... 29
2.2.4 Hybrid Systems ...... 29
3 Design Issues and Requirements from the Internet of Things & IoT6 Architecture ...... 32
3.1 Scalability ...... 32
3.2 Dynamic ...... 32
3.3 Sleep mode awareness ...... 32
3.4 Payload and frame size constraints ...... 32
3.5 Global access and query capabilities ...... 33
3.6 Multi device operations ...... 33
3.7 Based on existing Internet technologies ...... 33
3.8 Semantic description ...... 33


4 Open Service Architecture Proposal: Global Resource Directory and Service Discovery ...... 34
4.1 Overview ...... 34
4.2 General Architecture ...... 34
4.3 Components description ...... 36
4.3.1 Smart Object discovery protocol ...... 36
4.3.2 Local Resource Directory ...... 36
4.3.3 Global Resource and Service Directory ...... 36
4.4 Integrating resources (things, devices and tags) ...... 41
5 IPv6-based Smart Object discovery protocol ...... 43
5.1 Lightweight Look-up and discovery (lmDNS) ...... 43
5.1.1 Functionality description ...... 43
5.1.2 Satisfaction of the defined design issues ...... 47
5.2 Light-weight Resource and services directory (DNS-SD) ...... 50
5.2.1 DNS-Based Service Discovery Records ...... 50
5.2.2 DNS group ...... 51
5.2.3 Starting CoAP devices ...... 51
5.2.4 Proxy discovery ...... 51
5.2.5 Network architecture for DNS-SD through CoAP ...... 52
6 Semantic services description ...... 53
6.1 Ontology-based Resource Description and Discovery Framework ...... 53
6.1.1 OpenIoT ...... 54
6.1.2 SENSEI ...... 55
6.1.3 SSN-XG ...... 55
6.1.4 IoT-A ...... 56
6.1.5 IoT.est ...... 57
6.2 Semantic Descriptions for the Internet of Things ...... 58
6.2.1 IPSO Alliance Interfaces (IETF) ...... 58
6.2.2 Representing CoRE Link Collections in JSON ...... 62
6.2.3 SPITFIRE: Semantic Web of Things ...... 63
6.2.4 EXI: Efficient XML Interchange ...... 64
6.2.5 oBIX: Open Building Information Xchange ...... 65
6.3 Comparative table between Data Exchange Technologies on Internet of Things ...... 66
7 Search Engine: context awareness ...... 67
7.1 Elastic Search ...... 67
7.1.1 Query DSL ...... 67
7.1.2 Filters and Caching ...... 68
7.1.3 Mapping Types ...... 68
7.1.4 Indexing Data Example ...... 69
7.1.5 Searching Data Example ...... 69
7.1.6 ElasticSearch in Digcovery ...... 71


8 Communications interfaces and management functions ...... 73
8.1 RLUS management interface over UDP ...... 75
8.2 JSON – Java interface between digcovery and the digrectories ...... 76
8.3 CoAP Resource Directory ...... 76
8.4 GSN interface ...... 76
9 Proposed discovery mechanism (digcovery protocol) ...... 79
9.1 Discovery phases ...... 79
9.2 Registration ...... 81
9.3 Resource and service discovery ...... 81
9.4 Extending discovery to non-IP clusters ...... 82
10 Privacy and access management ...... 84
10.1 Access Control Lists ...... 85
10.2 Role Access Control mechanism ...... 85
10.3 Attribute Based Access Control ...... 86
10.4 Other approaches ...... 86
10.5 Digcovery approach summary ...... 86
11 Conclusions ...... 88
References ...... 89


List of Tables

Table 1: Example of ecobus resource directory record ...... 18
Table 2: Mapping ecobus resource directory record to VM Lab2 service DNS record ...... 18
Table 3: PTR record for light_lab ...... 43
Table 4: Discovering a type of object through mDNS ...... 44
Table 5: Looking up the service associated with the light found ...... 45
Table 6: TXT entries with the extra information of the found light ...... 45
Table 7: TXT query of the found light in a single TXT record ...... 46
Table 8: AAAA query of the found light ...... 46
Table 9: SRV query of the found light without optimizations ...... 47
Table 10: TXT query of the found light ...... 49
Table 11: Discover services example ...... 58
Table 12: Defined interfaces in draft-shelby-core-interfaces-03 ...... 58
Table 13: Example of using the Link List interface ...... 59
Table 14: Example of using the Batch Interface ...... 59
Table 15: Example of using the Linked Batch interface ...... 60
Table 16: Example of using the Sensor interface ...... 60
Table 17: Example of using the Parameter interface ...... 60
Table 18: Example of using the Read-only Parameter interface ...... 61
Table 19: Example of using the Actuator interface ...... 61
Table 20: Example of using the Binding interface ...... 61
Table 21: Observable parameters ...... 62
Table 22: Example of using the Observation request ...... 62
Table 23: Example of mapping ...... 63
Table 24: Example of SPARQL query ...... 64
Table 25: Example of using oBIX to read a value ...... 65
Table 26: Data exchange technologies for the Internet of Things ...... 66


List of Figures

Figure 1: Key components of the architecture proposal to enable the Open Service Layer ...... 13
Figure 2: Experiment setup with highlighted tested elements ...... 19
Figure 3: Avahi-discovery screen on VM Lab1 showing service parameters ...... 20
Figure 4: Registration of devices with mDNS Avahi ...... 21
Figure 5: Resource and service discovery showing RD to mDNS Avahi look-up and service description update ...... 22
Figure 6: Architecture overview of the service discovery infrastructure ...... 35
Figure 7: Central approach, such as the one used for the CoAP Resource Directory ...... 37
Figure 8: Distributed approach, such as the one used for OpenDHT ...... 38
Figure 9: Overview of the mDNS / DNS-SD Connector, showing how to publish globally the resources and devices found on each local network ...... 39
Figure 10: Classic M2M interaction between Things and Client based on a proxy approach ...... 40
Figure 11: Example of the mobile application that is currently being developed ...... 40
Figure 12: New dynamic, flexible and elliptic approach from Digcovery ...... 41
Figure 13: Resources ecosystem ...... 42
Figure 14: Avahi discovery ...... 44
Figure 15: DNS-SD through CoAP interaction ...... 52
Figure 16: General APIs view ...... 74
Figure 17: Registration of devices and DNS domains ...... 81
Figure 18: Resource and service discovery ...... 82
Figure 19: EPCIS Query through digcovery system ...... 83


Executive summary

General overview

It is predicted that, in the Internet of Things (IoT) [1], over 50 billion devices will be connected to the Internet by 2020. Therefore, high scalability is needed to manage every resource connected to the network, along with a high capability for the autonomous registration and discovery of resources and services, dynamically adapted as new devices are included in the network and changes are made to the existing ones.

Currently, the most widely deployed discovery architecture for the Internet is the Domain Name System (DNS). Through its extensions multicast DNS (mDNS) and DNS Service Discovery (DNS-SD), DNS offers the query and discovery of services by type and properties. Initial work on mDNS and DNS-SD for the discovery of things has already taken place and has been shown to satisfy (i) the discovery of resources, from the IoT point of view, and (ii) the discovery of services, i.e. Web services such as CoAP, from the Web of Things point of view. However, a complete architecture has not yet been proposed which manages the global discovery, the local directories, and a search engine adequate for the requirements and use cases of the Internet of Things. A detailed analysis of the impact of DNS on Smart Objects is needed, as well as of parallel issues such as semantics for the description of services and resources, access control policies for security and privacy, and the interconnection with the current (cloud-based) M2M platforms and mobile clients through an Open Service Layer.

Towards this goal, this document presents a lightweight multicast DNS (lmDNS) for IPv6-enabled Smart Objects, since mDNS cannot be applied directly: it is designed for host-based requirements and does not take into account the design issues and constraints of Smart Objects. This work also presents a global discovery architecture interoperable with DNS, called digcovery, accessible via www.digcovery.net. Individual drivers have been designed to interconnect different kinds of objects, things, devices, sensors and tags (RFID, Handle System, legacy technologies, etc.). Finally, a search engine, access control policies, and a set of management functions are proposed. All these elements contribute towards the key purpose of the IoT6 project: to build an Open Service Layer which makes its full integration into the IPv6 architecture feasible through protocols such as DNS and the other communication interfaces which define the Open Service Layer.

The IoT6 Open Service Layer (digcovery) enables Smart Objects to be discoverable, accessible, available, usable, and interoperable through IPv6 technologies.


Summary of the proposed advantages

Before describing the proposed architecture, Open Service Layer, mechanisms, and APIs that support a global and scalable discovery, this Executive Summary offers a brief overview of the main advantages of the proposed solution.

Lightweight multicast DNS and Service Directory (lmDNS) Overview

First, we are proposing an IP(v6)-oriented discovery mechanism, i.e. a solution based on the DNS protocol, which is the main discovery protocol of the current and Future Internet. We also consider mDNS for local self-discovery based on multicast messages, as carried out by commercial solutions such as Bonjour in Apple products and Avahi for Linux-based platforms. Finally, we also consider DNS Service Discovery (DNS-SD) for solutions with infrastructure, i.e. with a resource directory in the architecture.

The main reason to propose this DNS-based solution, and not the CoAP Resource Directory proposed by the IETF Working Group, is that the original protocol for discovery is DNS, and support for it can be found in many platforms, operating systems, routers, and networking devices. For example, one can run a simple nslookup on a Windows platform, or dig on a Linux or Mac OS platform, to query it. This is because DNS support is provided with the kernel of the operating systems, since it is part of the IP(v6) family of technologies.

The main reason that the CoRE working group at the IETF currently considers a CoAP Resource Directory instead of DNS-SD is to unify everything in a single protocol, i.e. the CoAP protocol. This makes sense when the footprint of the firmware for the constrained sensors used in Smart Environments needs to be reduced: it is less expensive to use CoAP for everything than to implement both CoAP and DNS. For this reason, we have developed a very lightweight implementation of DNS, re-using many components from the CoAP and 6LoWPAN stacks, in order to reduce as much as possible the impact on the firmware footprint. In addition, a set of new designs and optimizations has been proposed to reduce the control overhead; the result has been called the lightweight multicast DNS (lmDNS) protocol, which satisfies the requirements and design issues of the Internet of Things and Constrained Applications. More details about lmDNS can be found in Section 5 of this document, and further details and links to the implementation in Deliverable D3.3.

Scalable Domain handling architecture

Global resource and service discovery architectures require managing the different domains within a single management system. We have proposed a core management system for the discovery, called digcovery, since it is based on DNS (the dig command in Linux and Mac OS systems). Digcovery is public and accessible from anywhere through digcovery.net. Digcovery allows the delegation of each domain to the end-user and/or service provider through what are called digrectories. Each digrectory is able to interact with the local devices and smart things, protecting the accessible and publishable resources of the local domain. The most significant aspect is that it allows the directories to be published and linked to digcovery, which is globally accessible and allows global discoveries by considering the resources from all the digrectories


around the world. Note that digcovery is really a cloud-based platform; therefore, it is highly accessible and scalable around the world. These local resource directories allow the services offered by the devices of one specific domain/deployment to be available to other devices/users (i.e. global service and resource discovery). Further information about the digcovery and digrectory architecture can be found in Section 4.

Integration on legacy and non-IPv6 devices

The aforementioned domain handling architecture has also been proposed to offer IPv6 addressing for non-IPv6 and legacy devices through the IPv6 Addressing Proxy (see Deliverable D2.1). Specifically, digrectories have been defined which act as drivers between native service interfaces and a CoAP-based interface, in order to map them to the digcovery interfaces. Thereby, domains and subnets with different physical layer technologies are supported using CoAP and similar naming conventions. The digrectories adapt the legacy or proprietary devices from these subnets/domains to CoAP, integrating the different application layer protocols and naming conventions; for example, CoAP over BACnet, CoAP over Konnex/EIB, or CoAP over X10.

Integration on EPCIS for RFID and Handle System for DOI

Digcovery has been integrated with the Electronic Product Code Information System (EPCIS) presented in Deliverable D6.1. Thereby, physical objects which have an RFID tag attached can be found as smart things. Several properties and features of these objects are gathered through the extended description offered by the EPCIS. In parallel with the effort being carried out for RFID through the Electronic Product Codes (EPC), the project is also working on Digital Object Identifiers (DOI) through the Handle System. Therefore, the concept can also be applied to books, documents, movies (DVDs, BRs), and music (CDs, MP3). Note that the main reason to introduce DOI in addition to EPC is that DOI is a free-of-charge ID space, already present in several products, whereas EPC depends on registration through the GS1 office of each country and a contract with EPCGlobal Services1; nevertheless, EPC obviously needs to be considered in order to integrate the Internet of Things world defined around the RFID technology.

Integration of digcovery with other middleware and data-focused platforms such as Global Sensor Networks (GSN) from OpenIoT

Finally, digcovery has been integrated with the Global Sensor Network (GSN) architecture, one of the most widely used middleware architectures for low-power embedded wireless networks, which is open and free.

1 EPCGlobal – GS1: http://www.gs1.org/epcglobal


Digcovery allows the exportation of resources and services directly to the GSN framework. This makes the management and use of services and resources located around the world, for personal or industrial purposes, a simple process. GSN allows application-specific services to be built over this Open Service Layer and supports the collection of data from the sensors, the management of statistics, the raising of alarms, etc. The integration of the digcovery resources into GSN offers a flexible and scalable environment to discover, look up, manage, and use the data from the Smart Things, EPCIS, Handle System (DOI) and legacy technologies (BACnet, KNX, X10, etc.). The integration of digcovery with GSN also brings a very positive collaboration with the OpenIoT project.


1 Introduction

1.1 Purpose and scope of the document

The IoT6 research project aims at researching and exploiting the potential of IPv6 and IPv6-related technologies to develop an Open Service-oriented architecture overcoming the current Internet of Things fragmentation. This document is related to Task T3.1 on Open Service Layer design, which defines and adopts the global IoT6 architecture defined in D1.2 in order to provide the service layer and mechanisms for the discovery, look-up and integration of services from Smart Objects among the different platforms, clients and devices found around the world and connected via IPv6. In addition, it will be shown how the proposed architecture allows this indexing mechanism to be integrated with common existing ones, such as the Digital Object Identifier (DOI) and the Electronic Product Code (EPC), in order to offer a homogeneous Open Service Layer through the IPv6-based technologies. Specifically, this document describes the design issues for the Open Service Layer proposed from the IoT6 point of view, in order to offer a global discovery and look-up platform based on IPv6 technologies such as DNS. This document also describes the APIs and functions offered by the Open Service Layer, which provide look-up and discovery, context awareness, a resource repository, and access control solutions including privacy management. Finally, integration with non-IP technologies, i.e. legacy and proprietary technologies, as was done for IPv6 connectivity in Deliverable D2.1, is also described.

1.2 Key components of the architecture proposal

This document presents the key components needed to build an IPv6-focused Open Service Layer. The main tasks for this Open Service Layer are to provide a global discovery / look-up solution with context awareness capabilities and security & privacy access control support.

Several elements are involved in building an Open Service Layer with the above mentioned capabilities. Figure 1 presents all the components considered to build our architecture proposal.


Figure 1: Key components of the architecture proposal to enable the Open Service Layer

The green component presents the IoT6 Open Service Layer, whose purpose is mainly to build the interfaces with the client applications / users through Web Services such as RESTful, through enterprise communication interfaces such as JSON/XML, or through specific interfaces for third-party platforms, such as the presented example of the Global Sensor Network (GSN) platform used in the OpenIoT EU FP7 project.

The dark blue components present the key components designed, proposed and developed (see D3.3) in this project to provide a homogeneous and interoperable environment in which to discover, look up and register services and resources. The main element is digcovery, which is the global discovery platform. This platform is used to locate the different domains and the widely deployed directories with the different resources. The other elements are the directories containing the descriptions of the resources and services from each one of the domains. These directories are not technology dependent and can therefore be connected with any other platform through a driver. Drivers are considered for platforms such as the EPC Information System for RFID tags presented in IoT6 WP6, and the Handle System from CNRI for Digital Object Identifiers (DOI). Finally, a Smart Object Discovery Protocol based on current IPv6-based discovery protocols is proposed in order to enable the interaction between IPv6-enabled devices and the directory of their domain. Specifically, a lightweight version of the Domain Name System (DNS) extensions for local discovery based on multicast (mDNS), together with the DNS Service Discovery semantics to describe services and resources over DNS, has been defined. A survey of the different local and global resource directories and service discovery mechanisms is given in Section 2. An elliptic approach for global discovery, called digcovery, and an adaptation of current protocols through drivers to a DNS-based approach (an IPv6-enabled technology) for the directory and discovery have been adopted in the end.

The proposed Open Service Layer supports DNS in addition to the Web Services protocols and enterprise communication interfaces. For this reason, in order to provide the integration of protocols such as DNS, an elliptic approach, rather than a distributed solution, i.e. Distributed Hash


Tables (DHTs), or a centralized approach such as M2M platforms, has been chosen. The distributed approach is not discarded, and is also designed, described and analyzed in Section 4. These elements of the digcovery/digrectory proposal and the Smart Object discovery protocol are presented in detail in Sections 4 and 5 respectively.

The black components involved in the proposal have not been developed or designed in the context of this project. They have mainly been analyzed and adapted to the requirements of the IoT6 Open Service Layer and the designed modules (the dark blue ones). The first key component is the semantic description, which is very important in order to provide a powerful IoT6 Open Service Layer. For the semantic description, the work of other EC projects such as SPITFIRE has been taken into account, along with outputs from events such as the Interoperability PlugFest held in conjunction with the Probe-IT project, and standardization groups such as the IPSO Alliance, ETSI and the recently released oneM2M. Section 6 presents the different existing approaches and those supported by the proposed Open Service Layer. The second key component is the search engine, which is a key element of any powerful discovery solution. Digcovery has integrated ElasticSearch with some extensions based on geo-location, application profiles and domains for context-aware look-up. Section 7 presents the search engine. The third key component is the set of management functions and communication interfaces needed to interoperate with third-party platforms and solutions. CoAP is considered in order to be compatible with the current Internet of Things trends, SenML and JSON to be compliant with the IPSO Alliance and IETF approaches, and other enterprise interfaces such as RLUS for management. Finally, a port to the third-party platform used in OpenIoT has been defined in order to extend and integrate the designed solution with the OpenIoT solution. Section 8 presents the communication interfaces with the different protocols, and Section 9 presents the digcovery mechanisms to register, discover, look up, access, and integrate other types of objects such as RFID tags (in addition to the Smart Objects based on IPv6). The last key component, and one of the most important, is the security and privacy needed to provide access control mechanisms ensuring the protection and privacy of the users' data and resources. The analyzed solutions are based on access control lists, and Section 10 analyzes the different access control list solutions considered for discovery and directory. Note that access control is managed at the digrectory level.


2 Literature review: Look-up/discovery, context awareness and resource repository solutions

This section is organized following the evolution of the initial requirements: first, to connect Smart Objects; then, to build applications over them; and finally, to define techniques to discover services and resources and to define different kinds of directories.

Initial work has been carried out in order to offer IPv6 connectivity to Smart Objects based on IEEE 802.15.4, Bluetooth Low Energy, BACnet, etc. This work is contextualized mainly under 6LoWPAN [2], GLoWBAL IPv6 [3], and 6man [4, 5] for the integration of IPv6 even over MS/TP media [4]. Once the capability to connect end-to-end through the Internet to any Smart Object was available, it was considered necessary to define homogeneous access at the application layer. Analyzing the current Internet status, the Web is the most widespread service medium; therefore the Web of Things [6] was defined, in which, at the beginning, RESTful (HTTP) packets were carried over 6LoWPAN. This solution was seen as highly flexible and as having great potential. Then a reduced version of REST for constrained environments was defined, the Constrained Application Protocol (CoAP). Once there is access to the sensors, and a set of services which can be easily and globally accessed, an easy and scalable way to discover these devices and their services is needed.

Two different levels of discovery are found following an Internet of Things and ubiquitous computing approach to discovery systems and mechanisms [7]. The first level is resource discovery, i.e. the discovery of devices on the network; the second level is service discovery, i.e. the discovery of the services, methods and functions offered by a specific resource. Usually, the Internet is considered as a set of resources from a general point of view, where services are part of these resources. However, when the Internet is not limited to just files, applications, and services, and moves towards a more physical approach, the physical location and identification of the information is also required. Resources are reachable through technologies such as 6LoWPAN, GLoWBAL IPv6, Bluetooth Low Energy, or any technology offering IPv6 support.

Resource discovery is the process by which the user is able to find devices offering services according to his criteria and interests. It can range from resources that the user explicitly requests to a more sophisticated discovery where the network is more pro-active and notifies the user about the availability of new devices. Resource discovery will provide descriptive information, such as the resource type or family and some attributes describing it. In addition, it will provide the information that the user needs to reach them, i.e. a locator such as a URL, UID, Host Identity (HIP) or IP address. Resource discovery management requires dynamic updates to the system with the new resources included in the network, as well as the ability to integrate updates from mobile devices [8], in order to be consistent with the real resources reachable at a specific moment.

Service discovery is focused on the description of the services provided by technologies such as those that are Web-based, i.e. XML and Web Services, or other technologies such as JSON and DNS Service Discovery. These services include printing and file transfer, music sharing, servers for pictures, documents and other file sharing, as well as services provided by other resources.
Simpler services can be considered with the expansion towards the Internet of Things such as the


environmental status consultation for temperature, humidity and lighting, a pressure value for a parking sensor, or a glucose level.

Different techniques can be found for resource and service discovery in the Internet of Things. Currently, the most common approach is the definition of M2M platforms, such as ThingWorx, Pachube, Sen.Se, and SENSEI, where the devices are registered in the platform and are reachable from the Internet through Web Services such as SOAP and REST. The problem with this static approach is that it is limited to the information on the platform and the manually registered devices; therefore, more scalable solutions need to be defined. Such solutions make it possible for resources entering the network to become available by registering with the discovery system, without any interaction with the user, on a directory system that is homogeneous and can be queried simply over the Internet, without the need to use a specific M2M platform. This capability for autonomous registration, and for the discovery functionality to be dynamically adapted as new devices are included in the network, is necessary for the Internet of Things to be flexible and ubiquitous. Scalability is also required in order to manage every resource connected to the network, whose number is continuously increasing. Solutions that require manual and static management of resources, with fixed registration over the specific directory systems of the M2M platforms, can no longer be considered feasible.

Some naming systems such as the Lightweight Directory Access Protocol (LDAP), Universal Description, Discovery, and Integration (UDDI), and the Domain Name System (DNS) [9] offer resource and service directory capabilities to which more specific resource discovery technologies can be added, such as UPnP, JINI, the Service Location Protocol (SLP), and the Rendezvous or Bonjour protocols over DNS with the DNS-SD extension and multicast DNS (mDNS) [10]. However, none of the existing implementations take into account the requirements of the IoT, such as sleep modes and the constraints on computing power, battery capacity, available memory, and communications bandwidth. The next section presents the design issues for the IoT, followed by an analysis of the issues with the current initial solutions for the IoT. Some optimizations and recommendations to make mDNS and DNS-SD lightweight, in order to reach a solution more suitable for the IoT constraints and requirements, are then defined. For Smart Objects, approaches are either based on mDNS and DNS-SD, following the DNS protocol, or based on CoAP interfaces. These directories have in common the location of resources and a description of their services.

2.1 Local Service/Resource Directory

2.1.1 DNS Service Directory and multicast DNS

The most widely deployed directory server on the current Internet is DNS, and an extension of the basic DNS was proposed by the IETF Zeroconf WG. Specifically, DNS-SD [9], or the Rendezvous protocol, is commonly used in conjunction with multicast DNS (mDNS) in solutions such as Bonjour for Mac OS and Avahi for Linux. DNS-SD and mDNS present a solution where no additional infrastructure beyond the current DNS servers is needed, merely requiring that resources be enabled with IP-based addressing. The solution is focused on the re-use and extension of existing Internet standards. A multicast approach can also be found in other protocols, such as the first-stage Service


Location Protocol (SLP) and the JINI protocols through multicast, but the advantage of mDNS is the re-use and extension of the existing Internet protocols. In our approach, the focus on DNS-SD and mDNS is based on the objectives of enabling all resources with IPv6 addresses and of re-using and extending current Internet technologies.

mDNS is the main protocol used to query and populate the DNS-SD servers. It is widely used to provide zero-configuration host names (in the .local domain). It offers a distributed service discovery that can allocate pointers inside (mDNS) and outside (DNS-SD) a network. DNS-SD is scalable to enterprise deployments, since a centralized server can be defined per enterprise or building, or, in an IoT deployment, down to room level. The results from a DNS-SD or mDNS query are essentially identical; the same clients can work for large or small networks. Specifically, records are managed at host level with mDNS, while in the infrastructure mode, i.e. DNS-SD, the repository is updated with a modification in the delegated server, which has global effect. The main problem is the caching of DNS entries, which can be addressed by defining a lower lifetime (TTL) for them so that they remain dynamically adaptable.
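As an illustration of the DNS-SD resolution chain described above (browse the service type via PTR records, then resolve the instance's SRV and TXT records, and finally the host's AAAA record), the following sketch uses the dnspython library; the domain name and service type are hypothetical placeholders for a DNS-SD enabled zone, not part of the digcovery implementation.

import dns.resolver  # dnspython

DOMAIN = "building1.example.net"  # hypothetical domain delegated to a digrectory

# 1. Browse: enumerate instances of the CoAP service type (PTR records)
for ptr in dns.resolver.resolve(f"_coap._udp.{DOMAIN}", "PTR"):
    instance = ptr.target
    # 2. Resolve the instance name to host/port (SRV) and attributes (TXT)
    srv = next(iter(dns.resolver.resolve(instance, "SRV")))
    txt = next(iter(dns.resolver.resolve(instance, "TXT")))
    # 3. Resolve the target host to its IPv6 address (AAAA)
    aaaa = next(iter(dns.resolver.resolve(srv.target, "AAAA")))
    print(instance, srv.port, aaaa.address, txt.strings)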

2.1.2 Resource Directory based on CoAP (RD)

Following an approach similar to DNS-SD, a Resource Directory (RD) based on CoAP has been defined. It is accessible through a CoAP-based interface, so no additional protocols are required. This RD is used as a repository for the Web Links to the resources hosted on the Smart Objects, which act as Web servers through their REST/CoAP interfaces. These Smart Objects are also able to act as clients. The RD provides functionality very similar to DNS, but with CoAP instead of the DNS protocol, and its naming is also segmented by domains and sub-domains. Differently from DNS, these entries are not permanent and require a refresh from the Smart Object (in accordance with the maximum-age configuration). In addition, there is a mechanism similar to DNS-SD to carry out the query, but in this case it is based on the description of the parameters through the CoRE Link Format [11, Lynn2011]. Finally, the protocol [Shelby2011b] provides the functionality to create, delete, and update the directory entries. The Resource Directory interface is explained in further detail in the APIs and interfaces section.
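As a hedged illustration of how a Smart Object could register its links with such a CoAP Resource Directory, the following sketch uses the aiocoap Python library; the RD address, endpoint name and resource link are hypothetical, and the exact RD resource paths may differ between implementations.

import asyncio
from aiocoap import Context, Message, POST

# Link-format description of the resources hosted on the Smart Object
LINKS = b'</sensors/temp>;rt="temperature";if="core.s"'

async def register():
    ctx = await Context.create_client_context()
    request = Message(code=POST,
                      uri="coap://[2001:db8::1]/rd?ep=node1&lt=3600",
                      payload=LINKS)
    request.opt.content_format = 40  # application/link-format
    response = await ctx.request(request).response
    # A successful registration returns a Location path, which the Smart
    # Object later uses to refresh the entry before the lifetime expires.
    print(response.code, response.opt.location_path)

asyncio.run(register())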

2.1.3 Resource Directory to DNS-SD and mDNS mapping

To provide DNS-SD and mDNS functionality through the Resource Directory interface as defined by the CoRE Link Format [11], it is necessary to provide a unified interface to service and resource discovery for all IoT devices and sensors in the network. In this way, all the IPv6-based devices will also be accessible from the service and application layer using the same Resource Directory interface and semantics. The mapping between the CoRE Link Format and DNS-based service discovery is proposed in [12]. This proposal defines how the CoRE link attributes map onto DNS-SD records.


The main idea is to implement an additional module within the RD in order to perform this mapping in real time. Furthermore, appropriate protocol adapters will be used to perform the service discovery using the DNS-SD and mDNS protocols. Essentially, these protocols are the same, but subtle differences might exist in DNS server discovery and domain mapping. In addition, extra TXT entries will be considered, carrying location information for geo-discovery and extended information required by the discovery engine.
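A minimal sketch of the idea behind such a mapping module is given below; the attribute names, the _coap._udp service type and the TXT keys are illustrative assumptions, not the exact mapping defined in [12].

# Sketch: turn one CoRE Link Format entry into DNS-SD-style records.
def core_link_to_dns_sd(host, port, link):
    """link example: {'href': '/sensors/temp', 'rt': 'temperature', 'if': 'core.s'}"""
    instance = f"{host}-{link['href'].strip('/').replace('/', '-')}"
    service = "_coap._udp.local."
    return {
        "PTR": (service, f"{instance}.{service}"),
        "SRV": (f"{instance}.{service}", (0, 0, port, f"{host}.local.")),
        # Link attributes become TXT key/value pairs, plus the resource path
        "TXT": (f"{instance}.{service}",
                [f"path={link['href']}"] +
                [f"{k}={v}" for k, v in link.items() if k != "href"]),
    }

records = core_link_to_dns_sd("node1", 5683,
                              {"href": "/sensors/temp", "rt": "temperature", "if": "core.s"})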

2.1.4 Example of Resource Directory to DNS-SD and mDNS mapping

The purpose of the setup example is to provide a proof of concept for service discovery using DNS-SD and mDNS within an IPv6 network, integrated with the service discovery based on the Resource Directory. The experiment setup, based on the existing EcoBus test-bed, is shown in Figure 2. As displayed, the IPv6 network is set up containing a DNS-SD server with two clients acting as a large device cluster. There are two more clients acting as a small device cluster. The first phase of the experiment consists of testing service discovery using mDNS for an IPv6 client from the small device cluster. For this purpose, two virtual machines are set up as Lab1 and Lab2. Virtual machine Lab1 runs the Avahi-discovery mDNS implementation, while on virtual machine Lab2 a service is defined by mapping the ecobus resource directory resource description onto the TXT portion of a DNS record. Table 1 and Table 2 show the RD record and the associated DNS record respectively, and the suggested mapping between the two.

Table 1: Example of ecobus resource directory record

Resource-ID: urn:sensei:w3solutions.rs:android:1231
Name: Komundroid
Storage-ID: 1001
Expiration-Time: 12-12-21T23:00:00+00:00
Tag: Beograd
Tag: BeogradPut
RAI-ID: 123123123
Description: GET vraca link servera Beograd puta (GET returns the URL of the "Beograd put" server)
REP-Locator: http://rest.w3solutions.rs/beogradput

Table 2: Mapping ecobus resource directory record to VM Lab2 service DNS record

Service instance name: %h
Service type: _ecobus._tcp
Port: 22
TXT entries:
Resource-ID=urn:sensei:w3solutions.rs:android:1231
Name=Komundroid
Storage-ID=1001
Expiration-Time=12-12-21T23:00:00+00:00
Tag=Beograd
Tag=BeogradPut
RAI-ID=123123123
Description=GET returns URL of "Beograd put" server
REP-Locator=http://rest.w3solutions.rs/beogradput


Figure 2: Experiment setup with highlighted tested elements (the EcoBus test-bed IPv6 network with web and mobile clients; digcovery, digrectory and Resource Directory servers; DNS-SD and mDNS protocol adapters; IPv6 devices in a large and a small cluster; bus sensors; VM Lab1 running Avahi-discovery/mDNS; and VM Lab2 hosting the ecobus service)

- VM Lab1 is running Avahi-discovery

- VM Lab2 has defined ecobus service


Figure 3: Avahi-discovery screen on VM Lab1 showing service parameters

In Figure 3, a screenshot from VM Lab1 is given, showing the attributes of the discovered ecobus service running on VM Lab2.

For this purpose, the Avahi-discovery (www.avahi.org) mDNS implementation has been used, combined with Avahi4j (http://code.google.com/p/avahi4j), a simple Java API built on top of Avahi to browse and publish services. Further details of Avahi4j's main package, containing the classes to interact with Avahi, are available at its web site [13].

The registration of services offered by the IPv6 device from a small cluster with Avahi mDNS discovery is shown in the sequence diagram in Figure 4. Avahi mDNS discovery runs mDNS look-up continuously. When a new service is found, a service description is sent from the IPv6 client to the Avahi database. The Avahi4j API runs service browsing continuously and, when the service is resolved, Avahi4j collects the service description information. For the registration of a new service, Avahi4j sends a request to the RD for a new storage ID to which the new service description should be mapped. The RD replies by sending a storage ID, and the service description is mapped to the RD storage ID URL.
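This registration exchange can be sketched as follows; the RD base URL, the resource paths and the payload fields are hypothetical placeholders used only to illustrate the flow described above, not the actual RD interface.

import requests

RD = "http://rd.example.net"  # hypothetical Resource Directory base URL

# 1. Request a new storage ID for the service discovered via mDNS/Avahi
storage_id = requests.post(f"{RD}/resources").json()["storage-id"]

# 2. Map the service description onto the RD storage ID URL
description = {"Resource-ID": "urn:sensei:w3solutions.rs:android:1231",
               "Name": "Komundroid",
               "REP-Locator": "http://rest.w3solutions.rs/beogradput"}
requests.put(f"{RD}/resources/{storage_id}", json=description)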


Figure 4: Registration of devices with mDNS Avahi (sequence between the IPv6 device from the small cluster, mDNS, Avahi discovery, the Avahi4j API, the application, and the RD)

Resource and service discovery showing the RD to mDNS Avahi look-up is given in the sequence diagram in Figure 5. If an application that browses the available services is interested in a particular service offered by an IPv6 device, it receives the resource storage ID from the RD. The application retrieves the service description from the storage ID URL, and finds an address from which it can retrieve the service results. In Figure 5, the procedure for service description update is also explained. Avahi mDNS discovery runs mDNS look-up continuously and refreshes the service description in its database. The Avahi4j API runs service browsing continuously and monitors changes in the service description. In case of changes in the service description, Avahi4j sends a new service description and maps a new record to the RD storage ID URL.


Figure 5: Resource and service discovery showing RD to mDNS Avahi look-up and service description update

The next steps of the experimental setup involve testing service discovery for the IPv6 clients within the large device cluster using DNS-SD (the digcovery and digrectory solution). This experiment is set up in a similar virtual environment, with virtual machines running the digcovery DNS-SD solution. Three virtual machines are needed: a digcovery server, a digrectory server and an IPv6 client.

Methods that allow web and mobile clients to obtain service results (measurement values) from IPv6 devices discovered by mDNS and DNS-SD should also be examined. Finally, aggregation of resource directories is possible for all resource types.

Upon final development of the testbed, two different use cases will be tried:
- Web clients access through the digcovery platform (i.e. the IPv6 network) instead of the ecobus.
- Mobile clients make both a local discovery interaction with the digrectory from the ecobus and a global access through the IPv6 network.

2.1.5 DNS-SD/mDNS and CoAP Resource Directory/Discovery main differences

The main difference between mDNS and DNS Service Discovery (DNS-SD) with respect to the CoAP Resource Directory is that mDNS and DNS-SD re-use and extend the DNS protocol to carry out the resource directory and service discovery functionalities, while the CoAP Resource Directory re-uses and extends the CoAP functionalities to build the


Resource Directory and Service Discovery. The main advantage of the CoAP Resource Discovery for Smart Objects is that it only requires a common stack (CoAP / 6LoWPAN / IEEE 802.15.4) for the different application functions and also for discovery. The main disadvantage is that it is totally out-of-band with respect to IPv6-based protocols such as DNS. For that reason, we propose a lightweight version of mDNS in Section 5.

The CoAP Resource Directory/Discovery is appropriate for small networks, i.e. intra-domain solutions where the applications run within the same LoWPAN or LAN; in those cases, defining a solution with limited scope is not a problem. However, in the case of more extended solutions, i.e. globally accessible or discoverable devices, this approach still needs to be re-designed. For this purpose, DNS-SD can be considered more appropriate for large networks, as it offers a centralised common service discovery mechanism. It is important to note that DNS-SD usage can range from discovering a laptop (Bonjour in Apple devices) or a printer connected to the network (Brother solutions) to discovering a temperature sensor (new building automation options from the IETF WG and the draft [14]). There are already some building automation initiatives, such as the BACnetIT WG, that are also considering DNS-SD/mDNS for service discovery depending on the network [5].

2.2 Global look-up and discovery systems

Global look-up and discovery mechanisms are important for the IoT to find resources and devices that live outside the local network. They are mainly used by entities residing in one domain to search, understand, and contact entities living in other domains. In [15] we can find a survey of different discovery mechanisms, separated by their scope into local and global discovery, and also by their underlying technology. This section gives a brief introduction to those mechanisms that would fit the IoT scenarios, as well as to our own approach, which integrates both local and global scopes to ease and homogenize the discovery task.

2.2.1 Simple look-up/resolution systems

As discussed above, many types of discovery approaches, both for local and for global network search, can be found. Moreover, there are many different approaches for global discovery, some of them based on a simple resolution from an identifier (name or key) to a location or to another, more specific identifier. In this subsection, some outstanding approaches for simple resolution which can be used globally for resource or device discovery in IoT environments are introduced.

2.2.1.1 Domain Name System (DNS)

The Domain Name System (DNS) [16, 17] is a hierarchical distributed naming system for computers, services, or any resource connected to the Internet. It associates and resolves domain names to addresses or other data. Thus, a DNS server resolves queries for these names into IP addresses for the purpose of locating computer services and devices worldwide. By providing a worldwide, distributed, keyword-based redirection service, the DNS is an essential component of the current Internet. Even though the DNS is quite old, it is still used by many new discovery mechanisms, as in the previously described mDNS/DNS-SD. This is because it is a widely proven infrastructure and has demonstrated a high degree of robustness and reliability. However, this does not mean it is bulletproof, and it is widely known to have weaknesses and design flaws.

In global search and discovery operations, DNS acts as a hierarchical resolver. A client


needs to point to a specific DNS server, which usually is on its network or is the closest server to it. Then, as in normal operations, the client sends a query to its configured DNS server in order to perform a resolution. The DNS server tries to resolve the query from its authoritative entries or from its cache; if this fails, it asks the server that handles the top-most part of the domain name either to resolve the domain name or to indicate the server that resolves the next part of the domain name. This process ends when an answer is received or when the authoritative server for the domain name specified in the query is reached. The operation of the DNS is very simple, but it is able to provide more complex resolutions by using some of its extensions, like DNS-SD [10].

2.2.1.2 Handle System Integration as an Enabler in an Internet of Things Smart Environment

Although DNS is widely used to resolve names to addresses and can also be used for general resolutions from a key to a value, it does not provide a consistent and easily secured mechanism to persistently represent object identifiers that can be resolved to the objects. Also, other mechanisms based on URLs depend on the location of web servers, since the identifiers are bound to them. To overcome these limitations, the Handle System [18] provides efficient, extensible, and secure resolution services for unique and persistent identifiers of Digital Objects. The Handle System provides a means of managing digital information in a network environment and is a component of the Digital Object Architecture (DOA) [19] proposed by the Corporation for National Research Initiatives (CNRI). A Digital Object has a machine- and platform-independent structure that allows it to be identified, accessed and protected, as appropriate. A Digital Object may incorporate not only informational elements, i.e. a digitized version of a paper, movie or sound recording, but also the unique identifier of the Digital Object and other metadata about it. The metadata may include restrictions on access to the Digital Object, notices of ownership, and identifiers for licensing agreements, if appropriate.

The Handle System includes an open set of protocols, a namespace, and a reference implementation of the protocols. The protocols enable a distributed computer system to store identifiers, known as handles, of arbitrary resources and to resolve those handles into the information necessary to locate, access, contact, authenticate, or otherwise make use of the resources. This information can be changed as needed to reflect the current state of the identified resource without changing its identifier, thus allowing the name of the item to persist over changes of location and other related state information. Some examples of applications that use HDL identifier and resolution services as infrastructure are rights management applications, Digital Object registries and repositories, and institutional data preservation and archiving.

Starting from the Handle System, in [20] we can find a proposal that uses it as a mechanism to create globally unique identifiers with meta-information for the IoT in a globally connected world. The use of the Handle System in this architecture reduces its complexity, in terms of the overall smart environment IoT architecture, and ensures that common information used by various components is stored centrally. Thus, when changes are required, only one component needs to be updated. The architecture is based on the following components:
- BeachComber: a bearer-agnostic event-driven platform able to receive and send messages via numerous channels [21].
- ThingMemory: an enterprise-scale application where a cyber-model (and associated


status) of the physical world is represented [22].
- Mock Smart Environment: a newly-created building block consisting of an Arduino open-hardware platform linked to a number of different sensors and actuators.
- HandleProxy: a newly-created building block acting as a proxy to the Handle System.
- Runtime configurable decision engine (embedded within ThingMemory) that, based on information received from BeachComber, calculates the appropriate control actions for BeachComber to execute within the smart environment.

Thus, the proposed scheme stores the command-set of a thing in its own handle. This allows actions (e.g. the capability and status) to be communicated. Even though the scheme is unsophisticated, it has proved to be valuable, and it can still be revisited and refined further. It is clear that using handles for small devices (things) adds value, so this concept is explored in our research. Although the experimental smart environment presented in [20] embodies a limited number of things, we will investigate its feasibility in wide environments with many devices involved.
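As an illustration, a handle can be resolved over HTTP through the public Handle System proxy REST API; the handle value below is a hypothetical identifier, and in a real deployment the prefix would be the one delegated to the operator of the digrectory.

import json
import urllib.request

HANDLE = "20.500.12345/thing-01"  # hypothetical handle
url = f"https://hdl.handle.net/api/handles/{HANDLE}"

with urllib.request.urlopen(url) as resp:
    record = json.load(resp)

# Each handle value carries a type (e.g. URL, or an application-defined type
# holding the thing's command-set) and the associated data.
for value in record.get("values", []):
    print(value["index"], value["type"], value["data"])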

2.2.2 Overlay Network and Distributed Hash Table (DHT)

Overlay networks are infrastructures built on top of other (underlying) network infrastructures, typically the current Internet, to provide a totally different network structure and behaviour. They have their own routing algorithms, which are used to locate the destination node and also to deliver packets/messages from one intermediate node to another towards the destination. Each node is typically identified by its own identifier, which is not an address as in IP. However, each routing algorithm may impose certain constraints on the identifiers, such as being sensitive to some distance function used to measure the distance between each pair of nodes.

The Distributed Hash Table (DHT) is, as its name suggests, a hash table that is distributed across a group of network nodes. DHTs are typically built on top of overlay networks because these fit the routing and node location requirements of the typical workload. DHT-based systems allow a value to be found given the key associated with that value, for example finding the location of a file given its filename [23]. The systems that implement this principle are fully decentralized and scalable, achieve load balancing, provide a certain level of fault tolerance, and most of them have theoretically provable correctness and performance. The main drawback of such systems is their inability to support complex queries, i.e. they support only exact-match search, although there are some approaches that support such a capability through an ontology-based approach (as discussed in Section 2.2.3). Also, the approaches found in [24, 25] overcome this problem by introducing extra complexity into the system.

The simplest DHT operates as follows. A user wants to publish a value in the network. Some identifier for that value is hashed to produce a unique key. Then, the specific routing algorithm and look-up function of the DHT are used to find the node that will host the key/value mapping (or, in some approaches, the closest node to it). When the appropriate next-hop node is found, this key/value pair is propagated in the DHT overlay network to reach it. If another user wants to obtain the value, a similar procedure is initiated: the user obtains the key by hashing the identifier of the value; then it uses the routing algorithm to obtain the node (or the closest node) that has the mapping; finally, it requests the value from the appropriate next-hop of the overlay network, which sends back the value.

The typical length of a key is 160 bits (the base, B), which is the length of the hash produced by the SHA1 hashing algorithm. Typical DHTs and overlay networks use this algorithm


together with the assignment of more or less random identifiers to the nodes in the overlay, in order to spread the key/value mappings among the nodes. Moreover, most routing algorithms used in overlay networks to construct the DHT infrastructures are designed to guarantee an upper bound on the number of hops that a message needs to reach the destination node. Hop advancement is large at the beginning of the routing and decreases with each hop traveled. Since the way a key/value mapping is stored/retrieved is similar for all approaches, they differ in how the overlay network is built and in the routing algorithm used to build it. Below we discuss the particularities of the main overlay network and DHT approaches.

2.2.2.1 Chord

The nodes in Chord [26] are identified by a hash of 160 bits, typically obtained from the IP address of the node (although this is not a requirement), and sorted in a circle (ring topology). The packets are always sent in one direction (clockwise) around the ring until they arrive at the destination node. Thus, each node in Chord needs to know only its successor. However, to speed up the delivery process, each node stores log2(N) items in its routing table (fingers). The first finger is a pointer to the node with a distance equal to 2^0 (= 1), which is the next node (successor). The second finger is a pointer to a node with distance equal to 2^1 (= 2), the successor of the successor. The third finger points to a node with distance equal to 2^2 (= 4), and so on; the last finger points to a node with distance equal to 2^159, which corresponds to a node that is half a ring away. Each node is thus better acquainted with its nearest neighbourhood than with distant nodes. The key/value pairs are stored at the first node whose identifier is equal to or greater than the key, measured with the same distance function used in the routing. Chord uses consistent hashing [27] to ensure an even spread of keys among the nodes. To make the system fault tolerant, each node also has a pointer to its predecessor. The routing algorithm runs a stabilization protocol that periodically checks the consistency of the immediate successor and predecessor pointers.

The look-up procedure finds the finger that is closest to the desired key and sends a get_value message with the key to the node pointed to by that finger. This node will in turn route the message to the node from its finger table with the identifier closest to the key. The process ends when a node that receives the message does not know any other node closer to the specified key; if that node has the key/value entry, it sends it back to the requester node. This procedure requires at most log2(N) messages and, as discussed above, the routing table managed by each node consists of the same number of entries. Thus, when a node joins or leaves the overlay network, it results in log2(N) message exchanges.

There exist a number of other systems employing a similar ring topology. For example, Dabek et al. [28] have studied a variety of optimization techniques for DHTs, such as replication strategies, erasure coding, server selection, iterative and recursive routing, proximity routing, and neighborhood selection; DHash++ is a result of this study. Hybrid-Chord by Flocchini et al. [29] enhances the Chord performance and robustness by introducing some redundancy in the system, by laying multiple Chord rings on top of each other and using multiple successor lists of constant size. Chord-based DNS [30] and Cooperative mirroring / the Cooperative File System [31] are other examples of systems that use Chord.
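The following minimal sketch (with hypothetical node addresses and key names) illustrates the Chord identifier space, the finger-table targets at distances 2^i, and the successor rule used to place key/value pairs.

import hashlib

M = 160  # identifier length in bits (SHA1), as in Chord

def identifier(name: str) -> int:
    """Derive a 160-bit identifier by hashing a name (e.g. an IP address or a key)."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16)

def finger_targets(n: int):
    """Identifiers targeted by node n's finger table: (n + 2^i) mod 2^160."""
    return [(n + 2**i) % 2**M for i in range(M)]

def successor(ring, ident):
    """First node on the sorted ring whose identifier is >= ident (wrapping around)."""
    for node in ring:
        if node >= ident:
            return node
    return ring[0]

# Hypothetical four-node ring; keys are stored on the successor of their hash.
ring = sorted(identifier(ip) for ip in ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"])
key = identifier("urn:dev:temp-sensor-01")            # key of a published value
responsible = successor(ring, key)                     # node that stores the key/value pair
fingers = [successor(ring, t) for t in finger_targets(ring[0])]  # first node's fingers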


2.2.2.2 Tapestry

Instead of the circular (ring) topology used by Chord, Tapestry [32] uses a tree-based topology and routing algorithm. It is based on the work of Plaxton et al. [33] and uses the correlation between a node identifier and a key to route a message. The prefix of the node identifier in a routing table entry is compared with the prefix of the key: if that entry shares a longer common prefix with the key (by at least one more digit) than the current node identifier does, then the message is forwarded to that node.

For a tree-like search in Tapestry, each node needs to maintain logB(N) entries, where B is typically equal to 4. This guarantees the delivery of a packet in O(logB(N)) hops. Moreover, after a join or leave, the consistency of the system is restored in logB(N) messages. Derived from Tapestry, there exist other DHT-based systems that use a tree topology and routing algorithm. For example, Aberer et al. [34] propose a self-organizing system called P-Grid [24] that is able to adapt to a changing distribution of data keys over the nodes. The system is further used in the GridVine [24] semantic overlay.

2.2.2.3 Pastry

In contrast with Chord and Tapestry, the routing space in Pastry [35] is organized in a hybrid manner [36]. The search for a key/value pair starts using a tree method and, when the destination node is close, it switches to a ring method. The ring approach is also used when tree-based routing fails. To maintain such a hybrid topology, a node has to host three types of tables that assist the routing process. The routing table used for tree routing in Pastry has approximately [logB(N) * (B – 1)] entries. The leaf table used for ring routing contains about 2 * B entries: half of them are the numerically closest larger node identifiers and half the closest smaller node identifiers. Pastry routing does not fail unless half of the leaf table nodes fail simultaneously. Finally, each node maintains a neighbourhood table, which has 2 * 2B entries and is used for locality routing, i.e. to route packets via physically proximate nodes.

In summary, Pastry has the following characteristics: the routing is done in O(logB(N)) messages or hops, the number of routing table entries per node is [2 * B * logB(N)], and the number of additional messages required for a node to join or to leave the network is [logB(N)].

2.2.2.4 Kademlia

Kademlia [37] is a decentralized overlay network routing mechanism that uses the XOR metric to measure the distance between nodes. As in other approaches, the key/value pairs are stored on some of the nodes whose identifiers are close to the keys in terms of the XOR metric. Its operation is similar to Chord, but it uses XOR instead of sorting the nodes in a ring, and it also replicates each value on several nodes ("m") so that values remain available under a high rate of joins and leaves. To locate a value, the routing algorithm of Kademlia uses the same XOR metric to estimate the distance between the specified key and the identifier of a node. The requester node then sends the query to those "m" nodes from its routing table that are closest to the desired key. The look-up process stops when the requested key/value pair is retrieved. Additionally, this key/value pair is cached at the node closest to the key. Caching makes sense for Kademlia because it uses a unidirectional metric, which ensures that all look-ups for the same key converge to the same path irrespective of the peer issuing the request; caching thus alleviates hot spots along the look-up path.

The routing table of each peer contains the node identifier, UDP port, and IP address of the other peers located at a distance between 2^i and 2^(i+1) from itself (0 ≤ i < 160). For the routing, it maintains special lists, k-buckets, where k is a system-wide number, for example 20. Each k-bucket stores a list of nodes situated in a particular range of distances from the considered node. The distance is obtained by the XOR operation on the node identifiers, so two nodes are placed in the same bucket if their distances from the source node share the same highest-order bit. For example, nodes with distance metrics "110" and "101" will be put in the same bucket, while nodes with metrics "110" and "011" belong to different lists. Kademlia updates the k-buckets when it receives messages from other nodes. This process is optimized to keep the longest-living nodes in the routing table, as it has been shown that the longer a node stays in the network, the less probable it is that it will fail in the future [38].

Kademlia requires [logB(N) + c] messages for the look-up process, where c is a small constant. The routing table size is [B * logB(N) + B] and the number of update messages for node joins and leaves is [logB(N) + c]. The main difference from other overlay routing algorithms is that Kademlia uses iterative look-ups instead of recursive ones. This is important because a node does not need to trust other nodes to forward its queries, only to answer them, so it is protected against nodes that silently refuse to cooperate.
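The bucket placement quoted above can be illustrated with a short, hypothetical snippet that computes the XOR distance and the k-bucket index for 3-bit identifiers; the distances "110", "101", and "011" are the ones used in the example.

def xor_distance(a: int, b: int) -> int:
    return a ^ b

def bucket_index(a: int, b: int) -> int:
    # Index i of the k-bucket seen from a, i.e. 2^i <= distance < 2^(i+1)
    return xor_distance(a, b).bit_length() - 1

source = 0b000
for other in (0b110, 0b101, 0b011):
    print(bin(other), "-> bucket", bucket_index(source, other))
# 0b110 -> bucket 2, 0b101 -> bucket 2 (same bucket), 0b011 -> bucket 1 (different bucket)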

2.2.2.5 CAN

The routing in the Content Addressable Network (CAN) [39] is done in a virtual multi-dimensional Cartesian coordinate space on a multi-torus, although this structure is completely logical. Each node is assigned a unique area in the coordinate space; the coordinates of this area identify the node and are used in the routing. As in Chord, CAN uses a greedy routing strategy in which a message is routed to the neighbour of the current node that is situated closest to the required location. Therefore, for effective routing, a node needs to know the coordinates of its neighbours and their corresponding IP addresses. Such a routing strategy requires a continuous coordinate space without "empty", unassigned areas. Thus, when a node joins or leaves the overlay network, the space needs to be dynamically reallocated so that there are no "empty" areas and each node has a certain zone to control. A new zone is usually obtained by splitting the zone of some random node into two parts, and the absence of free zones is ensured by enlarging the zones of the nodes whose neighbour just left the network. The allocation of the key/value pairs is done using the same coordinate space: the keys are mapped uniformly onto the multi-torus using a hash function, and each node hosts the values assigned to its zone.

The CAN algorithm is highly scalable, fault-tolerant, and self-organizing. However, in contrast with the previously discussed algorithms, its look-up routing requires, at most, [d * N^(1/d)] messages, where d is the number of dimensions in CAN, and the routing table size does not exceed [2 * d] entries, which is also the number of messages needed to stabilize the DHT after a node joins or leaves the network. The data replication rate can be increased in CAN by introducing a number of parallel coordinate spaces called realities, so that each node may participate in several realities, or by using several hash functions on the same space to obtain multiple coordinates for the same key. These strategies increase the reliability of the system and reduce the average search query latency at the price of system complexity and resource consumption. Due to its scalability, CAN is used not only in traditional peer-to-peer applications, but also in large storage management systems, like Farsite [40] or OceanStore [41].
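A rough sketch of the CAN idea, with the d-dimensional coordinate mapping and the greedy routing collapsed into a single nearest-zone selection, is shown below; the node names, zone centres, and two-dimensional unit torus are illustrative assumptions only.

import hashlib

D = 2  # number of dimensions

def key_to_point(key: str):
    # One coordinate per dimension, derived from independent hashes of the key
    return tuple(int(hashlib.sha1(f"{i}:{key}".encode()).hexdigest(), 16) / 16**40
                 for i in range(D))

def torus_distance(p, q):
    # Euclidean distance on the unit torus (wrap-around in every dimension)
    return sum(min(abs(a - b), 1 - abs(a - b)) ** 2 for a, b in zip(p, q)) ** 0.5

zone_centres = {"nodeA": (0.25, 0.25), "nodeB": (0.75, 0.25),
                "nodeC": (0.25, 0.75), "nodeD": (0.75, 0.75)}

point = key_to_point("temperature.room1")
owner = min(zone_centres, key=lambda n: torus_distance(zone_centres[n], point))
print(point, "->", owner)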

2.2.3 Ontology-driven Semantic Systems

Apart from the simple resolution systems and the highly distributed systems (DHTs), there is another tendency: discovery mechanisms based on semantically linked data and thus driven by ontologies and vocabularies. Such systems provide a powerful mechanism to store and query complex information and are typically based on RDF and SPARQL for the task.

The Resource Description Framework (RDF) [42] is a family of W3C specifications used as a general method for the conceptual description and modeling of information, typically implemented in web resources through a variety of syntax formats. The foundation of RDF is the relation between a subject, a predicate, and an object, called a "triple". Therefore, in order to describe a resource, we define as many triples as necessary, using the resource as subject, the metadata type as predicate, and the metadata value as object. SPARQL [43], in turn, is currently the most common query language applied to RDF data sources. It is able to retrieve and manipulate data stored in RDF in terms of triple patterns, conjunctions, disjunctions, and optional patterns. Its syntax is similar to SQL, and SPARQL queries may be translated to SQL, but it is specifically defined for the RDF information structure.

The most sophisticated and semantically richest language in this family is the Web Ontology Language (OWL) [44]. OWL is based on RDF and RDFS (RDF-Schema) and was designed to realize complex domain descriptions based on a common logic. As such, the OWL language enables the profound and formal description of domain knowledge. OWL comes in different forms, with OWL based on Description Logics (DL) [45] being the most popular manifestation. The limitation of OWL to DL restricts it to a decidable fragment of first-order logic and thus facilitates logical reasoning over explicitly modeled facts with the help of dedicated DL-reasoning software, for instance Pellet [46]. This mechanism, on the one hand, allows the consistency of the represented information to be determined and, on the other hand, makes the inference of new facts in the OWL ontology possible. These frameworks and systems emerged from the Semantic Web and Linked Data initiatives, but the concepts can also be applied to global resource and service discovery. Below we describe some examples.
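Before turning to those examples, a minimal sketch of the triple model and a SPARQL query may make the discussion concrete; it assumes the Python rdflib library, and the namespace, resource names, and properties are purely illustrative.

from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/iot/")

g = Graph()
# One triple per statement: subject, predicate, object
g.add((EX.light_lab, RDF.type, EX.Lamp))
g.add((EX.light_lab, EX.hasInterface, Literal("X10")))
g.add((EX.light_lab, EX.locatedIn, EX.room1))

# Retrieve every resource of type ex:Lamp together with its location
query = """
PREFIX ex: <http://example.org/iot/>
SELECT ?lamp ?room WHERE {
    ?lamp a ex:Lamp ;
          ex:locatedIn ?room .
}
"""
for row in g.query(query):
    print(row.lamp, row.room)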

2.2.4 Hybrid Systems

In the above subsections, the simple discovery mechanisms, the mechanisms based on overlay networks and DHTs, and the ontology-driven semantic discovery have been discussed. In this section, hybrid systems that are ontology-driven on top of DHTs are discussed.

2.2.4.1 A P2P RDF repository for distributed metadata management

From the perspective of using a DHT for storing and searching RDF information that follows no specific ontology, we highlight RDFPeers [47]. It proposes a scalable peer-to-peer RDF repository which stores each RDF triple in a multi-attribute addressable network by applying globally known hash functions. Queries can be efficiently routed to the nodes that store matching triples, and users can selectively subscribe to RDF content. In RDFPeers, as with other DHT implementations, the cost of most operations grows logarithmically with the network size but is independent of the number of entries stored in the DHT. This includes the number of neighbours per node, the routing hops for triple insertion, query resolution, and triple subscription.

The overlay network used to build the RDFPeers infrastructure is based on Chord [26] and is named MAAN [48]. Its main improvement is to efficiently answer multi-attribute and range queries; however, MAAN only supports predetermined attribute schemata with a fixed number of attributes. RDFPeers exploits MAAN as the underlying network layer and extends it with specific storage, retrieval, subscription, and load-balancing techniques for RDF. Although many other discovery approaches use SPARQL as their query language, RDFPeers proposes to use RDQL [49], the previous "standard" query language for RDF, which has since been superseded by SPARQL. RDFPeers includes an RDQL-to-native query translator, which is used to obtain from the specified query the structures needed to query the DHT. RDFPeers stores each RDF triple in three different nodes of the DHT, one for each part of the triple. Thus, queries are performed by requesting the triples stored on the nodes responsible for the specified subject, predicate, or object. A query may specify only some parts of a triple and leave the rest unspecified; in that case the query takes longer, but the search is broader. In the end, depending on how specific the query is, its cost falls into different classes: the most general queries are resolved in O(N) messages and the more specific ones in O(log(N)), where N is the number of nodes in the overlay network.
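The core indexing idea of RDFPeers can be sketched as follows, with the overlay network replaced by a plain in-memory table and a globally known hash function; the node count and names are illustrative assumptions, not the actual MAAN implementation.

import hashlib

def node_for(value: str, num_nodes: int = 16) -> int:
    # Globally known hash function mapping a value to a node identifier
    return int(hashlib.sha1(value.encode()).hexdigest(), 16) % num_nodes

storage = {}  # node_id -> list of triples, standing in for the overlay network

def store_triple(s: str, p: str, o: str):
    # Each triple is stored three times, once per component
    for component in (s, p, o):
        storage.setdefault(node_for(component), []).append((s, p, o))

def query_by_predicate(p: str):
    # Only the node responsible for the predicate hash needs to be contacted
    return [t for t in storage.get(node_for(p), []) if t[1] == p]

store_triple("ex:light_lab", "rdf:type", "ex:Lamp")
store_triple("ex:sensor_1", "rdf:type", "ex:Thermometer")
print(query_by_predicate("rdf:type"))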

2.2.4.2 An Ontology-based Hierarchical P2P Global Service Discovery System

More deeply tied to ontologies, but still a P2P approach, [50] presents an architecture for global service discovery that uses ontologies to create logically constructed service classes while automatically generating an artificially intelligent hierarchical P2P network. Its target environment is the wide area, but it is also well suited for Local Area Networks. Any type of service can be defined within this architecture using an OWL ontology, including, but not limited to, event-based services, physical location-based services, and communication, e-commerce, or web services. As in Pastry [35], this approach combines hierarchical and overlay networks: the hierarchical network is formed by connecting the nodes between the high-level, disjoint services within the service classification, while the overlay network and its routing algorithms are based on CAN [39].

This architecture is shaped by service ontologies created by ontology engineers. The ontology engineers are experts in the particular service class and are aware of how best to classify and define the properties of that service. However, as users begin to query and register for services, the ontologies may evolve to reflect the social perceptions of a particular concept. Additionally, the issued queries are analyzed to see how the network can dynamically reshape itself to produce the results of specific queries with greater speed and accuracy.

The search operation in this architecture works as follows. First, a client queries the infrastructure for the server that manages the type of service it wants to find; this is performed automatically by using the ontologies to match the search terms with the corresponding server. This server forms part of the overlay network built with CAN, and the client search is then disseminated through the overlay network to find the description of the desired service. The CAN overlay network is structured using the classes defined in the ontology, with a different dimension for each group of classes that share a common parent, while keeping the number of dimensions balanced against the number of defined classes. Thus, as specified in CAN, the search operation issues a parallel query in the DHT for each combination of unspecified property values of the searched resource or service, so the more properties are specified, the fewer requests are sent to the overlay network.

2.2.4.3 Ontology-based Service Discovery in P2P Networks

Also based on ontologies and built on top of a peer-to-peer network, [25] proposes to enhance JXTA [51] by introducing semantic models of services using the Web Ontology Language (OWL) [44]. JXTA is an open-source P2P protocol specification launched by Sun Microsystems in 2001. Its protocol is defined as a set of XML messages which allow any device connected to a network to exchange messages and collaborate independently of the underlying network topology. Peers create a virtual overlay network which allows them to interact with each other even when some of them are behind firewalls and NATs or use different network transport technologies. In addition, each resource is identified by a unique identifier, a 160-bit SHA-1 URN in the Java binding, so that a peer can change its location address while keeping a constant identification number. Its operation is very similar to that of Tapestry [32], described above.

The architecture defines two layers: the low-level communications infrastructure based on the approach proposed by JXTA, and the high-level ontologies and reasoning layer. These two layers are mostly independent of each other, so the ontologies may be changed without changing how they are integrated with the overlay network, and the overlay network could be replaced while keeping the same ontologies on top of it. The communication performed on top of the overlay network is also kept independent of the other technologies by using SOAP [52].

The service discovery mechanism works as follows. First, the peers retrieve advertisements from other peers of the service type of interest (e.g. "Device:Printer:*"). Second, the peers retrieve the OWL files indicated by the advertisements, which describe the services provided by those peers, by calling a getFile SOAP method that all service-providing peers must implement. Third, the peers load the OWL descriptions into an inference engine and evaluate the data to decide which service(s) to use. Finally, the peers extract the WSDL interface description of the desired service from the service's advertisement and call the SOAP methods described there in order to invoke the service.


3 Design Issues and Requirements from the Internet of Things & IoT6 Architecture

The fast evolution of the IoT is creating new challenges in terms of scalability, allocation of resources, and efficient discovery. Therefore, an efficient method is required which is simple yet sufficient and satisfies the basic Smart Object requirements of low cost, light weight, and efficiency. Among the extensive set of existing solutions for resource and service discovery, it needs to be determined which one properly fits the requirements and constraints of Smart Objects. For that reason, this section presents the major challenges and design issues:

3.1 Scalability

It is estimated that over 50 billion devices will be connected to the Internet by 2020 [53]. This implies that a high number of resources, services, and locators (IPv6 addresses) will need to be managed. Therefore, a decentralized architecture is required, such as that defined in the DNS, which distributes the information about the services and location of the deployed Smart Objects based on their domain or anchor point. Thereby, the information can be managed locally, yet be accessible globally through the Internet architecture.

3.2 Dynamic

Smart Objects are being deployed continuously; therefore new devices and services will be continually defined. In addition, some Smart Objects will be mobile (wearable systems, Intelligent Transport Systems, etc.). Therefore, a solution is required which can be easily and dynamically updated, in order to manage the creation, update, and deletion of entries describing Smart Object services and locations.

3.3 Sleep mode awareness

The first constraint is that Smart Objects are usually battery powered. Thus, a sleep capability is needed in order to optimize their battery lifetime. This limits and defines new challenges for solutions based on approaches where the endpoint is directly queried, such as the multicast DNS solution.

3.4 Payload and frame size constraints

The original frame size of technologies such as IEEE 802.15.4 is 127 bytes. 6LoWPAN adds an overhead of 26-41 bytes, meaning that the final available payload is reduced to about half of the original size, i.e. 61 to 76 bytes of the original 127 bytes. Therefore, answers need to be heavily filtered, so as not to overload and flood the nodes with answers that require multiple packets (fragmentation).
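The arithmetic behind these figures can be reproduced with the small sketch below, assuming a worst-case IEEE 802.15.4 MAC overhead of about 25 bytes (the exact figure depends on addressing and security options).

FRAME_SIZE = 127            # IEEE 802.15.4 physical frame
MAC_OVERHEAD = 25           # assumed maximum MAC header/footer
LOWPAN_OVERHEAD = (26, 41)  # 6LoWPAN/UDP overhead range quoted above

for overhead in LOWPAN_OVERHEAD:
    payload = FRAME_SIZE - MAC_OVERHEAD - overhead
    print(f"6LoWPAN overhead {overhead} bytes -> {payload} bytes of application payload")
# prints 76 and 61 bytes, i.e. roughly half of the original frame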

3.5 Global access and query capabilities

A Resource Directory needs to be defined which can be queried at a local (specific-domain) level and at a global level in order to carry out wide surveys. Therefore, a mechanism able to answer queries at a domain-specific level is required, and it needs to be extended with other mechanisms in order to discover the domains in which the queried types of resources or devices are available.

3.6 Multi device operations

A mechanism is needed which can combine multiple devices into one logical device for operations that involve multiple devices. In the example "turn on all the lights in this room", getting an answer from each bulb in a room and sending a message to each of them is not feasible due to the payload size constraints previously mentioned. Therefore, multicast needs to be exploited, and the directory needs to support multicast groups.

3.7 Based on existing Internet technologies

The access to the directory should be based on already existing mechanisms, but with some considerations in order to reach a trade-off among all the presented challenges. Therefore, it could be based on DNS or some of its derivations, or it could be built over the application level, i.e. CoAP.

3.8 Semantic description

A common description of the services, and of the attributes required to carry out the queries, will be defined. This is a collateral requirement for defining the mechanisms that adequately filter the types of resources and services to be queried. Section 6 presents in depth the existing semantic descriptions for Smart Objects, in addition to related work in this area from European projects such as SENSEI/Hobnet, SPITFIRE, and OpenIoT.


4 Open Service Architecture Proposal: Global Resource Directory and Service Discovery

4.1 Overview

This section presents an Open Service Architecture for the Internet of Things, focused on offering a Global Resource Directory and Service Discovery mechanism based on IPv6 technologies. This Open Service Architecture is built over the IoT6 architecture presented in the architecture design document [54]. The adopted IoT6 architecture is based on the FI-WARE and ETSI M2M model, as outlined in the architecture design document [55]. FI-WARE defines so-called "Generic Enablers" (GEs) for the Internet of Things Service Enablement. The ideas of FI-WARE are close to those of IoT6, in that both examine how Smart Objects, commonly called "things", can become integrated into the Future Internet architecture based on IPv6.

Smart objects should be discoverable, accessible, available, usable, and interoperable through IPv6 technologies

The Internet of Things, as an integral part of the Future Internet, will offer additional value to real-world applications such as Smart Cities and Building Automation solutions. The proposed architecture addresses the combination of IPv6-based systems within a Service Oriented Architecture in order to integrate heterogeneous subsystems of the IoT. This architecture takes into account the necessary modifications in order to accommodate the required functionality, providing a unifying framework over the current heterogeneity and fragmentation of the IoT. This section focuses on: the service discovery mechanisms in relation to IPv6-based devices; the appropriate mapping to the Resource Directory from CoAP [12]; the proposed Global Resource Directory based on DNS-SD and mDNS [10] and the lightweight version for Smart Objects (lmDNS); and mechanisms that extend the basic DNS while efficiently leveraging the existing infrastructure.

4.2 General Architecture

Figure 6 shows the architecture overview of the service discovery infrastructure. It comprises components from the FI-WARE architectural model [56] as well as service discovery components such as the Resource Directory (digrectory), specific protocol adapters accommodating DNS-SD, mDNS, and the CoAP Resource Directory, and its own IoT6 stack API (discovery API) support. There are distinct types of device (sensor) clusters, namely ETSI M2M clusters, large IPv6 clusters, small IPv6 clusters, RFID clusters, and other clusters (IPv4, proprietary, legacy technologies, etc.). For the ETSI M2M clusters and others, the existing service discovery mechanism supports the CoRE Resource Directory [55]. It is based on DNS-SD and mDNS with appropriate protocol adapters providing the full set of required functionality. The focus of the subsequent analysis related to service discovery is on the IPv6 sensor clusters using the DNS-SD and mDNS methodology. The difference between the two types of IPv6 clusters is their size in terms of the number of sensors and the efficiency of multicast traffic within the particular segment of the network.

Where there is a small number of sensors and an efficient multicast infrastructure, it is possible to implement direct service discovery using only the mDNS (or lmDNS) mechanism. However, within large sensor networks, and where multicast traffic is considered inefficient, it is better to employ the DNS-SD methodology, where an additional DNS server is placed within the network, serving as a local database with resource directory records structured as per the DNS-SD convention [10].

Figure 6: Architecture overview of the service discovery infrastructure


4.3 Components description

The components of the resource and service discovery are: the Global Resource and Service Directory, digcovery, located in the backend; the Local Resource Directory, a component of the middleware deployed for each router or multiprotocol card; and the Service Discovery at the sensor level, which will be available only in Smart Objects such as IP-based WSNs. In the case of legacy technologies, the Service Discovery will be mapped through the techniques described in Deliverable D4.1. The role of each component is described below, following a bottom-up approach:

4.3.1 Smart Object discovery protocol

The Smart Object discovery protocol is located at the sensor and actuator level and is responsible for managing the Service Discovery for the sensors'/actuators' own applications, in the client role of the discovery platform. The Smart Object discovery protocol is also responsible for replying to the queries from the Local Resource Directory. Therefore, it plays a double role as client and sensor-level server, commonly called announcer. In the case of legacy technologies such as BACnet, or any other kind of sensor lacking this sensor-level intelligence, this functionality is delegated to the gateway, router, or panel, i.e. managed directly in the Local Resource Directory. The most widespread protocols for this purpose are DNS/mDNS/xmDNS and CoAP Resource Discovery, described in Section 2.

4.3.2 Local Resource Directory

The Local Resource Directory is located at the gateway, router, or multiprotocol card level and manages, at a local level (i.e. room or flat level), the resources of the sensors connected to the network that it manages/controls. The Local Resource Directory interoperates with the clients in order to provide information about the available resources and also manages the access control policies (such as node control lists). The Local Resource Directory hosts a search engine and adapts the services to a common semantic description in order to make them interoperable and understandable for any client.

4.3.3 Global Resource and Service Directory

This can be carried out through a centralized approach such as that presented in Figure 7, which depends on a server that knows all the services, or it can be based on a distributed approach, see Figure 8. Both approaches present a set of advantages and disadvantages. For that reason, the proposed solution for digcovery is a new paradigm called the elliptic approach, which combines the advantages of the centralized approach, i.e. an easily accessible and well-known access point, with the advantages of the distributed approach in terms of scalability.


Figure 7: Central approach, such as that used for the CoAP Resource Directory

4.3.3.1 Distributed approach

As presented in [15] and discussed in previous sections, there exist many discovery mechanisms targeted at local network discovery and at discovery in the global network, the Internet. However, it is difficult to find a mechanism that covers both local and global scopes; such a mechanism should merge capabilities from the two families of discovery mechanisms. In this subsection, we discuss a proposal to integrate a proper mechanism for the local discovery scope with a robust and scalable mechanism for global discovery. This way we can extend discovery operations launched in a local network to other networks, so that a client supporting the local mechanism is able to find, in a transparent manner, resources and devices connected to networks in other locations, without being specifically prepared for such an operation. We propose to integrate mDNS [9], for the local scope, with a DHT mechanism that provides load balancing, scalability, and robustness when extending the search and discovery operations to the global scope. In the local part of the operations, we can also incorporate the optimizations introduced by lmDNS, previously described in this document, which is specifically designed for IoT workloads.

As discussed in previous subsections, DHTs are structures which store key/value mappings across a set of nodes, which in turn form the overlay network that hosts the DHT. For our approach, Chord [26], an overlay network routing algorithm and DHT widely used and well known across the research community, will be used. However, the approach discussed here can be adapted to any other overlay network and DHT approach, since they share the same capabilities in terms of key/value storage and retrieval and therefore offer a similar interface to the client application. To perform the integration, we designed a connector artifact (mDNS / DNS-SD Connector) to be instantiated in the local domain of the elements (resources, devices) that should be published globally. This element uses mDNS/DNS-SD to find the available resources and devices in the local network or location where it is deployed and publishes new entries in the DHT, storing a capability/location (key/value) relation for each capability offered by the resources.


Figure 8: Distributed approach, such as that used for OpenDHT

The architecture of the connector, as shown in Figure 9, operates as follows. The devices use mDNS to discover the available services both in their local network and in the global network. The "Connector" component is in charge of knowing the services offered in its local network and publishes (offers) them through the "Information Infrastructure" (I2), which is an overlay network and DHT, initially built with Chord. The connector component is also in charge of receiving mDNS requests, performing the discovery operation by searching the I2 for the external services published by other connectors that match the query, and finally sending the results to the requesting client. The steps followed by the architecture for global resolution are as follows:
1. A client sends an mDNS request asking for a service (for instance, to know the temperature of a certain cabin/room).
2. The sensors (things) deployed on its local network check whether they offer the service and respond to the request.
3. The connector attached to the same local network as the client also receives the request and searches the I2 to find external services matching the request.
4. The I2 obtains the results (DHT entries) in DNS-SD format from the other connectors offering the requested service and transfers them to the requesting connector.
5. Finally, the requesting connector sends the aggregated responses to the client.

As commented above, the main advantage of this mechanism is that the clients only need to know the mDNS/DNS-SD discovery mechanism, which is widely deployed but specifically designed for local network discovery; transparently, the connector will provide the answers if there are external devices and resources. Another benefit is that the queries are the same for local and global discoveries, so the clients do not need any special notion of global resources/devices and global searches. Finally, in contrast with other global search and discovery mechanisms, the clients receive the records necessary to use the service in just one step.

Most of this Deliverable is concerned with services and applications on the Intranet, which is the heart of the IoT. In this case, it is most common to locate any repository on the Intranet or at a well-known location known to the domain.

However, there are other classes of application which are essentially very wide-area in scope, and in these cases it is important to have a storage architecture with widely dispersed access or even instantiation. Most of the repositories discussed in this Deliverable have not taken such a distribution into consideration. Certain ones, in particular the Handle System [18], are both IPv6-enabled and have paid considerable attention to ensuring global replication and reach. We expect in later Deliverables to study these in more depth and to work more closely with the Handle System.

Figure 9: Overview of the mDNS / DNS-SD Connector, showing how to publish globally the resources and devices found on each local network
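A minimal sketch of the connector logic described above is given below, with the Chord-based Information Infrastructure (I2) reduced to an abstract put/get interface; class names such as I2Client and the record format are assumptions made only for illustration.

class I2Client:
    # Stand-in for the overlay network/DHT; a real deployment would use Chord
    def __init__(self):
        self._table = {}
    def put(self, key, value):
        self._table.setdefault(key, []).append(value)
    def get(self, key):
        return self._table.get(key, [])

class Connector:
    def __init__(self, domain, i2):
        self.domain = domain
        self.i2 = i2
    def publish_local_services(self, services):
        # services: DNS-SD style (service_type, instance, host, port) tuples found via mDNS
        for service_type, instance, host, port in services:
            # capability -> location mapping, stored as a DNS-SD record string
            self.i2.put(service_type, f"{instance}.{self.domain} SRV 0 0 {port} {host}")
    def resolve(self, service_type):
        # Answer an mDNS query with the matching entries published by remote connectors
        return self.i2.get(service_type)

i2 = I2Client()
Connector("rd.esiot.com", i2).publish_local_services(
    [("_lamp._sub._coap._udp", "light_lab", "light1.rd.esiot.com", 1234)])
print(Connector("other.domain.org", i2).resolve("_lamp._sub._coap._udp"))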

4.3.3.2 Elliptic approach (digcovery)

Digcovery is a service with a full view of the accessible Local Resource Directories, which allocates the Local Resource Directories suitable for the required search engines. Its scope is limited to deployments such as a building; in the case of a campus or a smart city, more scalable solutions such as Distributed Hash Tables need to be considered, and Section 7 offers an overview of them. In addition to the discovery architecture described here, Sections 3 and 7 analyse in further detail the different options for implementing the local and global directory architectures, respectively. The semantic description of the resources and services presented in Section 6 is also relevant. Finally, a search engine is required, although one is inherent to some of the local and global directories presented; a search engine optimized for RESTful architectures will be presented in Section 6.

Digcovery is not the same as an M2M platform, which sits between the things and the clients as shown in the following figure: an M2M platform offers an abstract interface from the things to the clients and acts as a proxy for the communication. Digcovery rather offers a medium to discover, export, and interact with things in order to check their proper functionality before they are exported.


Figure 10: Classic M2M interaction between Things and Client based on the proxy approach

Once the sensor has been discovered through digcovery, CoAP messages can be sent directly from the user of digcovery to the sensor without going through the digcovery core. This is possible thanks to an embedded CoAP client implemented in digcovery through AJAX technologies, which launches the request from the client side such that it appears to have been sent by some other CoAP client such as Copper [57]. In addition, this offers a mechanism by which to export the sensor to M2M platforms such as GSN, so that the M2M platform can request the services from the things without using digcovery. In the same way, it provides a mobile client able to discover services by geo-location and to communicate directly through a CoAP client for Android. This mobile application is currently under development and will allow the discovery of heterogeneous sensors through digcovery in order to obtain information and - in the future - to be able to turn off a light or turn on an air conditioner.

Figure 11: Example of the mobile application that is currently being developed

The details of these mechanisms which provide end-to-end communication will be provided in Deliverable D3.3.
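As an illustration of this direct interaction, the sketch below issues a CoAP GET to a discovered sensor; it assumes the Python aiocoap library, and the address and resource path are illustrative placeholders rather than values returned by digcovery.

import asyncio
from aiocoap import Context, Message, GET

async def read_resource(uri):
    protocol = await Context.create_client_context()
    request = Message(code=GET, uri=uri)
    response = await protocol.request(request).response
    return response.payload

if __name__ == "__main__":
    # Hypothetical IPv6 address and path of a sensor found through digcovery
    payload = asyncio.run(read_resource("coap://[2001:db8::1]/sensors/light"))
    print(payload.decode())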


Figure 12: New Dynamic, flexible and elliptic approach from Digcovery

4.4 Integrating resources (things, devices and tags)

The presented architecture considers the core, the value chain/network from the end product to the manufacturer's backend systems, and the global ecosystem which encloses them. The Internet of Things ecosystem is composed not only of IPv6-enabled devices, tiny objects with communication and processing capabilities (so-called Smart Objects), but also of diverse species of objects. The sensors and actuators of the physical world have been developed to satisfy specific needs, such as RFID and NFC for transportation, in terms of logistics and ticketing respectively. Healthcare devices used for continuous patient monitoring through wireless sensor networks, based on (for example) the ZigBee Health Device Profile and Bluetooth, have specific requirements such as mobility, real-time operation, and reliability. Building and home automation with legacy technologies and proprietary protocols for lighting, heating, cooling, and security require real-time reaction and reliability. Smart meters and smart grid applications require mainly reliability. In the retail sector, smart tags, which require energy efficiency, low cost, real-time operation, and scalability, are replacing barcodes.

The aforementioned requirements have led to different technical alternatives and standards, such as the ETSI M2M architectures focused on cellular networks. Whilst these provide support for reliability, they have low energy efficiency, high cost, and relatively poor scalability.

The IETF, with the Working Groups ROLL, 6LoWPAN, CORE, LWIG, and COMA, is focused on constrained devices and sensors to provide low-cost protocols with high energy efficiency and scalability, but with no mobility support. EPCglobal for RFID provides low-cost, real-time wireless identification, but the sensing capabilities are not standardized yet. Finally, the Handle System for Digital Object Identifiers offers low-cost, real-time identification and does not rely on a pre-defined medium (usually barcodes or physical identification) to carry out the identification.

The integration of these different resources is handled by the digrectory, which is responsible for collecting, via NFC, 6LoWPAN, Bluetooth, or mDNS, the services offered by the various devices that inhabit its domain. The digrectory can register these services as long as it knows their point of origin. Furthermore, the collected services are stored in a DNS zone file of the bind9 program, allowing the services to be consulted at any time through DNS.

The following figure presents the resources considered, which are mainly Smart Objects identified through IPv6 addresses. These objects are connected through technologies such as 6LoWPAN, lwIP, IPv6 addressing Proxy, and GLoWBAL IPv6. In addition, resources may be identified through (for example) the Digital Object Identifier (DOI) via the Handle System, as RFID resources identified through the Electronic Product Code (EPC) and Universal ID (UID), or through the Host Identity Protocol (HIP), which is being integrated into the Internet of Things architecture through its lightweight version called HIP Diet Exchange (HIP DEX).

Figure 13: Resources ecosystem


5 IPv6-based Smart Object discovery protocol

5.1 Lightweight Look-up and discovery (lmDNS)

5.1.1 Functionality description

lmDNS-SD offers a set of implementation guidelines and design recommendations in order to make the use of mDNS suitable for Smart Objects. First, let us introduce the most common DNS records:
- A: address record for an IPv4 address.
- AAAA: address record for an IPv6 address.
- CNAME: alias of one name to another name.
- NS: delegation of a DNS zone to an authoritative name server.
- Others: MX for email, HIP for the HIP identifier, LOC for the location, and others related to security.

mDNS and DNS-SD are extensions of DNS with additional functionality for the PTR and TXT records. The most relevant record for mDNS and DNS-SD is:
- PTR: this record is used for reverse DNS look-ups, i.e. from address to name. However, it has a totally different use in mDNS and DNS-SD, where it is used for the description of the services; mDNS also uses it to filter the queries.

A usual discovery should start with the mDNS protocol in the local domain, or with DNS-SD for a more global approach, in order to discover the devices offering the type of service required. For this purpose, the PTR record is used, since it can define multiple pointers for a device depending on its functionality, family, type of device, etc. For example, Table 3 presents multiple PTRs pointing to a light in our lab, called light_lab.

Table 3: PTR record for light_lab

;Type _lamp._sub._coap._udp PTR light_lab ;Services _status._lamp._sub._coap._udp PTR light_lab _onoff._lamp._sub._coap._udp PTR light_lab _dimmer._lamp._sub._coap._udp PTR light_lab ;Technology _x10._lamp._sub._coap._udp PTR light_lab

The discovery of these services can be carried out with an mDNS client such as Avahi or Bonjour, in addition to the common DNS look-up services. Figure 14 presents the discovery of a resource based on Avahi, and Table 4 presents a query based on dig.

Table 4: Discovering a type of object through mDNS2

;_lamp._sub._coap._udp.rd.esiot.com PTR ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 62392 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION: ;_lamp._sub._coap._udp.rd.esiot.com. IN PTR

;; ANSWER SECTION: _lamp._sub._coap._udp.rd.esiot.com. 604699 IN PTR light_lab.rd.esiot.com.

;; Query time: 51 msec ;; MSG SIZE rcvd: 73

Figure 14: Avahi discovery3

Once the name of the service instance offering what you are looking for is known, this service is requested. For that, the SRV record is used.
- SRV: generalized service location record. It is like MX but for any service; it defines which machine supports a given service and on which port. The syntax is: SRV [priority] [weight] [port] [target hostname].

2 Command details: LINUX: dig _lamp._sub._coap._udp.rd.esiot.com PTR WINDOWS: nslookup -q=ptr _lamp._sub._coap._udp.rd.esiot.com. 3 Avahi publish command details: avahi-publish-service light1.rd.esiot.com _coap._udp 1234 rt=light ins=2 lt=86400 model=dimmer if=EIB area=1 zone=2 deviceID=3 value onoff

The priority and weight parameters let the client choose among the different options when several hosts offer the same service.

Table 5: Looking up the service associated with the light found

;light_lab.rd.esiot.com SRV ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 6373 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION: ;light_lab.rd.esiot.com. IN SRV ;; ANSWER SECTION: light_lab.rd.esiot.com. 604800 IN SRV 0 0 1234 light1.rd.esiot.com. ;; Query time: 118 msec ;; MSG SIZE rcvd: 79

This answer shows that the service light_lab.rd.esiot.com is provided by the host light1.rd.esiot.com on port 1234. We can now resolve the TXT entry in order to obtain more information about this device.

- TXT: this record contains metadata for the client. The format is [key]=[value] pairs, and the contents depend on the protocol. For example, DNS-SD defines the format of these records depending on the type of record, in a similar way as the CoRE resource discovery defines the link-format description carried in these records [11].

Table 6: TXT entries with the extra information of the found light

; light_lab.rd.esiot.com TXT ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 16345 ;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;light_lab.rd.esiot.com. IN TXT ;; ANSWER SECTION: light_lab.rd.esiot.com. 604770 IN TXT "onoff\;status\;dimmer" light_lab.rd.esiot.com. 604770 IN TXT "if=X10\;housecode=A\;unitcode=5" light_lab.rd.esiot.com. 604770 IN TXT "rt=light\;ins=1\;lt=86400\;model=normal" ;; Query time: 53 msec ;; MSG SIZE rcvd: 163

TXT entries are designed to be associated with the SRV entry, offering extra information (metadata). Usually, a single [key]=[value] pair is defined per record, such as those shown for Avahi in Figure 14. In order to reduce the number of records, the metadata can be grouped by services, resource type (rt), and interface (if), as presented in Table 6.

However, if the overhead is still high (176 bytes), a single record can be used as shown in Table 7, following the naming conventions that describe how services are represented in DNS records, as defined by the Web Linking description, specifically the version of the link format defined by the CoRE IETF working group [12].

Table 7: TXT query of the found light in a single TXT record

; light2.rd.esiot.com TXT ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 19187 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;light2.rd.esiot.com. IN TXT ;; ANSWER SECTION: light2.rd.esiot.com. 604800 IN TXT "rt=light\;ins=2\;lt=86400\;model=dimmer\; if=EIB\;area=1\;zone=2\;deviceID=3\;value\;onoff" ;; Query time: 79 msec ;; MSG SIZE rcvd: 130
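The consolidation of the per-key metadata of Table 6 into the single link-format-style record of Table 7 can be sketched with the helper below; the attribute names follow the tables above, while the helper itself is only illustrative.

def build_txt(attributes: dict) -> str:
    # Join key=value pairs (or bare flags) with ';', as in the single-record TXT entry
    parts = [k if v is None else f"{k}={v}" for k, v in attributes.items()]
    return ";".join(parts)

txt = build_txt({
    "rt": "light", "ins": 2, "lt": 86400, "model": "dimmer",
    "if": "EIB", "area": 1, "zone": 2, "deviceID": 3,
    "value": None, "onoff": None,   # bare flags without a value
})
print(txt, f"({len(txt)} bytes)")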

Once all the information about the hostname of the resource (SRV) and the service description with extra information (TXT) has been obtained, the IPv6 address of the device, which is reachable through technologies such as the aforementioned 6LoWPAN and GLoWBAL IPv6, needs to be resolved.

Table 8: AAAA query of the found light

; light1.rd.esiot.com AAAA ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 60429 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION: ;light1.rd.esiot.com. IN AAAA ;; ANSWER SECTION: light1.rd.esiot.com. 604800 IN AAAA 2001:720:1710::11

;; Query time: 75 msec ;; MSG SIZE rcvd: 65

A records can also be considered for backward compatibility with the current IPv4-based Internet infrastructure, as well as for other addressing and identification spaces such as the Universal Identifier (UID) from RFID or novel protocols such as the Host Identity Protocol (HIP).
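The whole PTR -> SRV -> TXT -> AAAA chain illustrated in Tables 4 to 8 can be scripted as sketched below, assuming the dnspython library (version 2.0 or later); the zone rd.esiot.com is the example zone from the tables and is not guaranteed to be publicly resolvable.

import dns.resolver

def discover(service_type: str, domain: str):
    results = []
    for ptr in dns.resolver.resolve(f"{service_type}.{domain}", "PTR"):
        instance = ptr.target.to_text()
        srv = list(dns.resolver.resolve(instance, "SRV"))[0]
        txt = [t.to_text() for t in dns.resolver.resolve(instance, "TXT")]
        aaaa = list(dns.resolver.resolve(srv.target.to_text(), "AAAA"))[0].to_text()
        results.append({"instance": instance, "host": srv.target.to_text(),
                        "port": srv.port, "address": aaaa, "metadata": txt})
    return results

print(discover("_lamp._sub._coap._udp", "rd.esiot.com"))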

5.1.2 Satisfaction of the defined design issues

- Scalable. As mentioned in Section 4, DNS-SD and mDNS present a scalable and decentralized architecture, which defines services at the local level through mDNS and at the global level through the hierarchical delegation of domain servers to locally managed repositories with DNS-SD. These local repositories can be located at the Border Routers of solutions such as 6LoWPAN.
- Dynamic. mDNS allows the service descriptions to be changed at the host level, and even with the DNS-SD solution the repositories are allocated at a local level, while being visible and accessible globally, to allow records to be updated easily and dynamically. Services needing very dynamic changes (e.g. the records associated with mobile nodes) are generally not suitable for storing in DNS caches, but this problem can be solved by defining low lifetimes, i.e. fine-grained lifetime management with the max-age attribute for records which are susceptible to changes.
- Sleep mode. This presents major challenges for solutions based on approaches where the endpoint is directly queried, as in the mDNS solution, especially in cases where the duty cycle is very low. For these cases, the buffering of the requests is delegated to the coordinator and Border Router, in order to send the requests to the sensor when it is awake. Following the buffering approach, and in order to avoid the need to exchange messages with the end node, DNS-SD in the Border Router can be applied so that the entries are offered directly instead of querying the end node. Extensions of the directory with mirror proxy functionality for values [58] are also being considered; sleeping nodes can delegate resource hosting to the proxy in order to make the resources available while they are sleeping.
- Payload size. The original frame size of technologies such as IEEE 802.15.4 is 127 bytes, which is reduced to 61 to 76 bytes of payload. The DNS protocol usually includes a section of additional records and another section with the authority records, i.e. a higher overhead. For that reason, as presented in the design issues and requirements (Section 3), the inclusion of these additional and authority records4 needs to be avoided. For example, Table 9 presents a query equivalent to that carried out in Table 5 for the discovery of the SRV entries associated with the service to consult. The addition of the extra information yields a packet size of 188 bytes instead of the original 79 bytes. This means that a packet with lmDNS-SD fits in a single frame, while the normal use of DNS-SD and mDNS requires 3 frames.

Table 9: SRV query of the found light without optimizations

;; search(light_lab.rd.esiot.com, SRV, IN) ;; query(light_lab.rd.esiot.com, SRV, IN)

;; send_udp(94.142.247.17:53): sending 40 bytes ;; timeout set to 5 seconds ;; answer from 94.142.247.17:53: 188 bytes ;; HEADER SECTION ;; id = 9950

4 It can be removed with dig using the options +noauthority +noadditional

;; qr = 1 opcode = QUERY aa = 0 tc = 0 rd = 1 ;; ra = 1 rcode = NOERROR ;; qdcount = 1 ancount = 1 nscount = 1 arcount = 4

;; QUESTION SECTION (1 record) ;light_lab.rd.esiot.com. IN SRV ;; ANSWER SECTION (1 record) light_lab.rd.esiot.com. 602400 IN SRV 0 0 1234 light1.rd.esiot.com.

;; AUTHORITY SECTION (1 record) rd.esiot.com. 602400 IN NS rd.esiot.com.

;; ADDITIONAL SECTION (4 records) light1.rd.esiot.com. 602512 IN A 155.54.210.163 light1.rd.esiot.com. 602400 IN AAAA 2001:720:1710::11 rd.esiot.com. 602400 IN A 155.54.210.159 rd.esiot.com. 602400 IN AAAA 2001:720:1710:0:216:3eff:fe00:9

The description of the services (TXT) should be simplified as much as possible in order to fit in a single frame. The TXT data should be defined in a single entry following a format such as the aforementioned link format. For example, Table 7 presents the same content as Table 6, but simplified with the link format contained in a single entry. The difference can be seen between the usual query in Table 10 and the lmDNS-SD version in Table 7, which is 130 bytes instead of 221 bytes. In addition, the TXT entry should be simplified and reduced further to values under 80 bytes, making it feasible for a single 6LoWPAN packet. This reduction could come through the use of wildcards for the identification of the parameter types, or through compression techniques such as LZ77.


Table 10: TXT query of the found light

;; search(light_lab.rd.esiot.com, TXT, IN) ;; query(light_lab.rd.esiot.com, TXT, IN) ;; send_udp(94.142.247.17:53): sending 40 bytes ;; timeout set to 5 seconds ;; answer from 94.142.247.17:53: 221 bytes ;; HEADER SECTION ;; id = 24910 ;; qr = 1 opcode = QUERY aa = 0 tc = 0 rd = 1 ;; ra = 1 rcode = NOERROR ;; qdcount = 1 ancount = 3 nscount = 1 arcount = 2

;; QUESTION SECTION (1 record) ;light_lab.rd.esiot.com. IN TXT

;; ANSWER SECTION (3 records) light_lab.rd.esiot.com. 604800 IN TXT "if=X10;housecode=A;unitcode=5" light_lab.rd.esiot.com. 604800 IN TXT "rt=light;ins=1;lt=86400;model=normal" light_lab.rd.esiot.com. 604800 IN TXT "onoff;status;dimmer" ;; AUTHORITY SECTION (1 record) rd.esiot.com. 604800 IN NS rd.esiot.com. ;; ADDITIONAL SECTION (2 records) rd.esiot.com. 604800 IN A 155.54.210.159 rd.esiot.com. 604800 IN AAAA 2001:720:1710:0:216:3eff:fe00:9

- Global query. DNS-SD is already accessible globally, but it still requires specifying the domain under which to carry out the query. In order to make it more scalable and able to discover the domain, we need to know where the resource types in which we are interested are available. Our current ongoing work is focused on the definition of a P2P architecture based on an overlay built with Chord over the lmDNS-SD architecture. This is done to discover the DNS-SD directories and domains of interest through the Distributed Hash Tables (DHT) of the different domains [26]. Further details about the digcovery elliptic approach to support global discovery are given in Section 9.
- Multi device operations. To support multiple devices, additional entries based on multicast can be defined in the DNS-SD. A query for "_all lights" will point to a multicast group which will be linked to all the lights of that room, building, or domain. Some examples of multicast for building control are found in [59].
- Based on existing technologies. The solution is based on DNS and its extensions DNS-SD and mDNS.
- Semantic description. A common description of the services and attributes needs to be defined in order to carry out the queries (see Section 6).

5.2 Light-weight Resource and services directory (DNS-SD)

IoT devices have a low amount of memory, CPU, and energy, so it is interesting to use CoAP, an application layer protocol for networks of these limited-capacity devices. Service discovery is concerned with finding the IP address, port, protocol, and possible path of a named service. Resource discovery is a fine-grained enumeration of the resources (path names) of a server. The CoAP link format can be used to enumerate attributes and populate the DNS-SD database in a semi-automated fashion. CoAP resource descriptions can be imported into DNS-SD for exposure to service discovery. The values stored in the DNS-SD directory are extracted from the information stored in the Resource Directory associated with a set of CoAP hosts.

It is assumed that a Resource Directory exists per 6LoWPAN [RFC4944], possibly running on the edge router. The DNS-SD provides a larger scope by storing the information of all services over a set of interconnected 6LoWPANs. Whereas the Resource Directory is possibly adequate for home networks, the handling of multiple Resource Directories can be quite cumbersome for many of the 6LoWPANs envisaged for offices. However, during network configuration, the Resource Directory can be used as long as the DNS is not yet accessible.

The DNS-SD approach is complementary to the more fine-grained resource discovery and better fits the concept of a service, as it discovers servers with given properties. DNS-SD supports a hierarchical approach to the naming of services and provides a directory structure that scales well with the network size, as shown by its present-day operation.

5.2.1 DNS-Based Service Discovery Records

DNS-Based Service Discovery (DNS-SD) defines a conventional method of configuring DNS PTR, SRV, and TXT records to facilitate discovery of services (such as CoAP servers in a subdomain) using the existing DNS infrastructure. DNS-SD Service Names are limited to 255 octets and are of the form:

Service Name = {Instance}.{Service}.{Domain}

The {Domain} part of the service name is identical to the global (DNS subdomain) part of the authority in URIs that identifies the resources on an individual server or group of servers.

The {Service} part comprises at least two labels. The first label of the pair is an underscore character generally followed by the application protocol name [I-D.ietf-tsvwg-iana-ports]. The second label is always "_udp" for CoAP services. In cases where narrowing the scope of the search may be useful, these labels may optionally be preceded by a subtype label (beginning with an underscore) followed by the "_sub" label. An example of the {Service} part is "_lamp._sub._dali._udp". Only the rightmost pair of labels is used to name SRV and TXT records. The default {Instance} part of the service name may be set at the factory or during the commissioning process; it uniquely identifies a {Service} within a {Domain}. Taken together, these elements comprise a unique name for an SRV record (and optionally a corresponding TXT record) within the DNS subdomain. The service instances (the values of PTR records) are the labels of the SRV, AAAA, and TXT records describing the service instance.

The SRV record specifies the location (authority) and the port number. The AAAA record specifies the IP address, while the TXT record specifies the subtype and the data representation for the legacy parser (e.g. if = ZigBee).
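A small, purely illustrative sketch of composing and decomposing a Service Name of this form is shown below; the labels reuse the lamp example of the previous subsection and are not prescriptive.

def compose(instance: str, service: str, domain: str) -> str:
    return f"{instance}.{service}.{domain}"

name = compose("light_lab", "_lamp._sub._coap._udp", "rd.esiot.com")
print(name)  # light_lab._lamp._sub._coap._udp.rd.esiot.com

def split_service(service: str):
    # Rightmost pair of labels (application protocol + transport) vs optional subtype labels
    labels = service.split(".")
    return {"subtype": labels[:-2], "protocol_pair": labels[-2:]}

print(split_service("_lamp._sub._coap._udp"))
# {'subtype': ['_lamp', '_sub'], 'protocol_pair': ['_coap', '_udp']}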

5.2.2 DNS group

Another aspect is the grouping of servers. In the previous section, the service names are standardized names; for group names, however, this is less likely. Usually the group names are application-specific or standardized by the manufacturer.

When a multicast message is sent to a group, the path of the accessed resource must be strictly the same for all servers. The naming of the path is typically the responsibility of the standardization organizations describing the command set for a given application area. However, a constraint exists in the case of multi-function devices which host multiple resources of the same type.

5.2.3 Starting CoAP devices

To start a network of sensors that provide services, one must:
- Define the URI (location)
- Assign an IP address to the URI
- Map the unique device identifier to the URI

When an architect has designed the building and described all light points, ventilators, heating and cooling units, and sensors, it is necessary to identify all these devices spatially and functionally. Storing the triple {Instance}.{Service}.{Domain} into DNS-SD represents the commissioning process. The {Instance} is the unique identifier given to the device in the factory, which has no relationship to its later location. The {Service} together with the {Domain} represents the spatial and functional aspects of the device as specified by the architect.

5.2.4 Proxy discovery

Proxies will be used in CoAP networks for at least two major reasons:
1. HTTP/CoAP proxy.
2. Proxy of a service on a battery-less device.

The first is probably implemented as a forward proxy, while the latter is probably implemented as a reverse proxy. The battery-less device will answer the GET /.well-known/core request only rarely (i.e. when it is not sleeping) and during installation. The returned data is used by the installation tool to make the proxy device return the same resource names on /.well-known/core as are returned by the sleeping device. The installation tool installs on the proxy all the resources of the sleeping device for which the proxy is assumed to answer. Consequently, the proxy is discovered as a multi-server host with as many path names as its sleeping servers expose, as sketched below. The servers on sleeping devices should not be discoverable via DNS-SD. However, AAAA records are generated for the sleeping device host name. This host name is used by the proxy to subscribe to the "sporadic" services of the sleeping device.
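A minimal sketch of this mirroring (the addresses and the resource name are hypothetical): during installation the tool reads the link format once from the sleeping device and replays the same entries on the proxy, so that later discovery only touches the proxy:

Req: GET coap://[2001:db8::99]/.well-known/core        (sleeping device, during installation)
Res: 2.05 Content (application/link-format)
</s/temp>;rt="simple.sen.tmp";if="core.s"

Req: GET coap://[2001:db8::10]/.well-known/core        (proxy, at any later time)
Res: 2.05 Content (application/link-format)
</s/temp>;rt="simple.sen.tmp";if="core.s"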


5.2.5 Network architecture for DNS-SD through CoAP

Figure 15: DNS-SD through CoAP interaction (CoAP hosts, a CoAP gateway, and legacy devices)

Figure 15 represents the network architecture with heterogeneous devices. The CoAP gateway connects one link with two legacy devices to the wireless CoAP network composed of three CoAP hosts. The CoAP hosts can freely exchange data representations according to the CoAP protocol over the wireless 6LoWPAN network. A host can send data representations to the CoAP gateway, which passes them on to the specified legacy host. The legacy device returns data to the requesting CoAP host via the same gateway.

The CoAP hosts can address the legacy devices behind the gateway in at least four ways (an illustrative sketch follows this list):
- All devices of the legacy network share the URI with the CoAP gateway. Every legacy device is a resource of the gateway as seen from the CoAP host. Consequently, the CoAP host sends the message to the IP address of the gateway and the gateway parses the URI-Path to determine the specified legacy device.
- All devices of the legacy network have IP addresses different from the IP address of the gateway. Consequently, a CoAP host sends the message to the IP address of the specified device. The routing protocol on the CoAP network makes the message arrive at the CoAP gateway. The gateway determines the specified legacy device from the destination IP address.
- All devices of the legacy network have different authorities. The authorities of the legacy devices resolve to an IP address of the gateway. This means that the possibly lengthy authority names need to be transmitted. The gateway recognizes the authorities and maps each authority to a legacy device.
- All devices of the legacy network have different ports. This can be expressed in two ways: (1) as :port in the URI, or (2) in the DNS-SD records. In the latter case the port is carried in the UDP header, which is efficient in packet header size.
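As a hedged illustration of the first two options (addresses and paths are hypothetical, not taken from the deliverable):

Option 1 - shared URI, the gateway parses the URI-Path:
Req: GET coap://[2001:db8::1]/legacy/device7/temperature     (2001:db8::1 is the gateway)

Option 2 - per-device IP address, routed through the gateway:
Req: GET coap://[2001:db8::7]/temperature                    (2001:db8::7 is assigned to legacy device 7)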


The major advantage of all four approaches is that the gateway only handles the URI or the IP address and port number when selecting the destination legacy device, independently of the type of legacy device and of the contents of the legacy payload of the message. In Figure 15, the gateway on the legacy side (left) connects the legacy devices over a single link.

6 Semantic services description

A common description of the services and attributes used to carry out queries needs to be defined. A collateral requirement is to define the mechanisms to adequately filter the types of resources and services to be queried. Currently, several different ways are being developed to establish a common representation for queries across the Internet of Things. Some of the work in this field is identified below.

Specifically, the IPSO Alliance is defining a common family of interfaces and resource types for the Resource Directory from CoRE [60]. This could be re-used in a similar way as it is re-used in the link format. IPSO is defining, on the one hand, a simple set of interfaces based on CoAP and plain text and, on the other hand, a more structured version based on JSON with the semantics from SenML.

As an alternative, more complex solutions such as Triple Spaces on RDF [61] can be used, which allow the retrieval, creation, modification or deletion of resources in RDF graphs. This knowledge representation is based on a common ontology, which all the entities involved in the communication share. Queries over RDF can follow a pattern similar to what was defined in CoAP, based on a triple pattern with wildcards (e.g. queries of the form (?s, ?p, ?o)), or more sophisticated and complex solutions such as SPARQL. It can also be applied to the description of devices through the Device Profile for Web Services (DPWS), based on SOAP or REST for the Web of Things [62].

On-going work is mainly focused on the capabilities to integrate the description of resources through RDF in the TXT records from DNS, and to query through mDNS following the described PTR patterns as if they were wildcards. The description of the resources will follow the work done by GSN and SPITFIRE [63].

In summary, regarding the semantic description, we are mainly interested in offering a semantic layer for sensor discovery and provisioning and in integrating CoAP and 6LoWPAN in this framework, through the overlay discovery based on Chord or a Resource Directory based on DNS-SD. This is in line with the OpenIoT (see Subsection 6.1.1) and SPITFIRE (see Subsection 6.2.3) approaches.

6.1 Ontology-based Resource Description and Discovery Framework

Ontology-driven approaches for search and discovery are increasing because of the power behind the semantic representation of linked data, including the description of resources and devices. Thus, in [64] we find an approach for an ontology-based resource description framework, developed particularly for ICT energy management purposes, where the focus is on the energy-related semantics of resources and their properties. It proposes a scalable resource discovery method for large and dynamic collections of ICT resources, based on semantic similarity inside a federated index using a Bayesian belief network. The framework allows users to identify the cleanest resource deployments for achieving a given task, taking into account the available energy source.


This approach also uses the RDF data model to describe the resources to be stored in the database of the discovery system, and uses a semantic analyzer to perform information processing based on such descriptions in order to determine the association of resources to the ontology concepts. The end-user stack begins with a request for resources. The query is analyzed and keywords are extracted and processed by the semantic analyzer. Based on its knowledge base, it determines the required resources and their locations. If a resource is available, it will be triggered.

In order to provide a powerful search method based on the proposed ontology, this architecture uses a Bayesian semantic graph which combines semantic inference and probability. The key idea is to integrate semantic links and reasoning rules which determine the cluster the resource belongs to by processing keywords and taking the context into account. As proposed in its ontology, a resource will be placed in one of three categories: computing, storage and network. However, a user query like (RAM=64Mb, IP address=10.0.0.1) may lead to confusion, because the result can be a server or a router. If further information, such as "bandwidth=1G", is added, a more accurate network resource can be found. The Bayesian semantic analyzer is used to deal with such confusion. It processes all the words in each resource description or user query and calculates the probability that the resource belongs to each cluster. Such a mechanism significantly improves the search operation. In order to define the concepts and clusters for the Bayesian network, the architecture builds a probabilistic table, as proposed in [65], which assigns joint probability values. The assignments are based on expert judgment and may be improved over time. When a resource has been analyzed and its keywords have been determined, it is represented by an RDF graph. This operation is achieved using an RDF query language, such as SPARQL.

6.1.1 OpenIoT

Another EC project that focuses on the semantic integration of devices and services in the Internet of Things is OpenIoT. So far, no publications are available about the progress of the project, so the information gathered here was mainly taken from the official project website5. OpenIoT is concerned with a variety of different areas: it plans to develop a middleware for sensors and sensor networks, to describe internet-connected objects by ontologies, semantic models and open linked-data techniques, and it aims to provide an extension to cloud computing.

Firstly, the project intends to integrate the Global Sensor Network (GSN) to provide the basis for looking up and registering internet-connected objects. Furthermore, OpenIoT is planned as a middleware platform designed to support flexible configuration and deployment of algorithms for collecting and filtering information streams stemming from the internet-connected objects5. At the same time, the middleware will be responsible for generating and processing important business and application events.

Cloud computing mainly describes the usage of internet-connected objects and machines in order to distribute processing power and storage capabilities onto different entities in a so-called cloud, and to make these capacities available as a service. The OpenIoT project aims to build on current state-of-the-art cloud computing middleware and equip it with the ability to use and configure sensor-based services5 with respect to cloud and utility computing. As such, within OpenIoT it is planned to provide instantiations of cloud-based and utility-based sensing services and to define so-called “Sensing-as-a-Service”, “Location-as-a-Service”

5 http://www.openiot.eu


and “Traceability-as-a-Service” models for internet-connected objects. Another main research objective is semantic object-to-object interaction and communication. The interoperability between internet-connected objects in OpenIoT is to be ensured with the help of semantic technologies like RDF and SPARQL. The project is geared towards enhancing existing interaction approaches like GSN with semantic sensor descriptions. As the goals of the OpenIoT project fit well with the goals of the IoT6 project, a collaboration between the two projects is desirable and planned.

6.1.2 SENSEI

For the integration of the physical with the digital world, the SENSEI project6 creates an open architecture that especially addresses scalability problems for distributed wireless sensor and actuator networks. The architecture designed in this project allows an easy and flexible plug-and-play integration of globally distributed sensors and actuators into a global system. A semantic description of devices helps to unify the view of, and the access to, distributed services and entities. The syntax and semantics of entities are contained in an advanced resource description, an ontology-based data representation with an RDF encoding. SENSEI therefore differentiates between two main models, the information model and the resource model [66].

The information model developed in SENSEI defines three different abstraction layers for entity description, namely raw data, observation and measurement, and context information. Raw data solely describes a value of interest that is received from an entity. Observation and measurement, however, is defined to represent additional meta-information about the observed raw value. Context information introduces yet more relations to connect real-world entities and represent their context. As such, SENSEI follows an entity-centric approach. These generic relations and concepts are designed to act as an upper ontology for domain ontologies. In turn, domain ontologies that base their definitions on the SENSEI upper ontology information model can be handled by the SENSEI system.

The other model defined in the project is the resource model: all entities (sensors, actuators, processors) can be modelled as “resources” in SENSEI. Information about how a resource can be accessed, where it is located and what its general functionalities are is stored. The resource description that represents some of these properties is defined by the resource provider and stored in a so-called resource directory, where all static information about resources can be found. The specific operations of a resource are described in addition to the resource description. An additional semantic operation description can be associated with the definition of the operations and in turn describes inputs, outputs and functionalities of a specific resource operation in a machine-interpretable form. The SENSEI query mechanism uses the already mentioned resource directory and a so-called entity directory so that the relevant resources can be found. The entity directory hereby provides the link between entities and their attributes as defined in the context model, with resources either providing the respective attribute values or executing an actuation according to the setting of the attribute value [66].

6.1.3 SSN-XG The W3C Semantic Sensor Network Incubator Group designed an OWL ontology called SSN-XG that describes sensors in a domain-independent way and as such facilitates semantic

6 http://www.sensei-project.eu


interoperability between sensors in sensor networks on the Internet of Things [67]. The developed classification can be used in the IoT to describe sensors and make their semantic representation globally available. One main focus of the SSN project is to develop ontologies for describing sensors and sensor networks. The second focus is the semantic annotation of sensor descriptions already available. Therefore, the SSN-XG realizes an extension of the Sensor Markup Language (SML), which is one of the four Sensor Web Enablement (SWE) languages defined by the Open Geospatial Consortium (OGC). This way, support for semantic annotations of sensors described according to the OGC standard is realized and the combination of different services and applications becomes possible.

As reported in [68], the SSN ontology offers a sensor view with a focus on what senses, how it senses and what is sensed; a data view with a focus on observations and metadata; a system view with a focus on systems of sensors; and a feature view with a focus on physical features, their properties, what can sense them and what observations of them are made. Sensors in SSN-XG are described as entities that follow sensing methods and have a feature of interest. Sensor entities may be physical devices but can also be processes and methods that observe certain phenomena. Due to the event-based nature of sensors and sensor networks, SSN-XG further considers temporal relationships. For grouping sensors, the SSN-XG ontology provides the “system” concept. A system can further be composed of sensors or split into several subsystems. The process module of the ontology further allows defining the function that is implemented by the described sensor. Other main concepts of the ontology describe the measurement capabilities of modelled sensors as well as the situations that are observed, i.e. the observations and the associated observation data.
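As a hedged illustration only (the prefixes, the example namespace and the occupancy sensor below are assumptions made for this sketch, not normative content of the ontology), a sensor and its grouping into a system could be described with SSN-XG concepts as follows:

@prefix ssn: <http://purl.oclc.org/NET/ssnx/ssn#> .
@prefix ex:  <http://example.com/iot#> .

# a sensing entity described with SSN-XG concepts
ex:sensor3 a ssn:Sensor ;
    ssn:observes ex:Occupancy .      # the observed property

# grouping through the "system" concept
ex:node42 a ssn:System ;
    ssn:hasSubSystem ex:sensor3 .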

6.1.4 IoT-A

The IoT-A project7 realizes an architectural reference model for the Internet of Things. The main focus of the project is the integration and interoperation of devices that are often stand-alone and not integrated due to heterogeneous standards and protocols. In this context, there are several focus areas of the IoT-A project. First, the project aims to define a commonly agreed-upon architecture of the IoT, which is currently nonexistent. Another aim of the project is to realize a distributed orchestration mechanism to efficiently deal with real-world dynamics and the changing availability of IoT devices. Further, the designed architecture will enable interoperability between devices by hiding the complexity of the end-to-end heterogeneity from the communication service and by providing translation mechanisms between technology-specific communication protocols. Additionally, the project develops a dynamic look-up and discovery mechanism for IoT devices.

In order to reach interoperability and the sophisticated discovery of IoT entities, a common architectural reference model for the IoT is needed that represents a common domain vocabulary. In IoT-A, this model description is realized as an OWL ontology. In [69] the authors describe semantic modelling for components of the IoT domain in the context of the IoT-A project with the help of the OWL-DL language. Therefore, they reuse certain concepts from other already existing ontologies and projects such as SSN-XG8 and SENSEI9. Mainly, the IoT information model designed in IoT-A is split into three different parts: entity, resource and service. As one of the main goals of the IoT is to extend the Internet into the physical world, with devices and other physical entities being directly accessed and operated on from the

7 http://www.iot-a.eu 8 http://www.w3.org/2005/Incubator/ssn/ 9 http://www.sensei-project.eu/


Internet, in IoT-A the so-called entity model describes the observable features of an entity. As such, information like location, temporal features or domain attributes is described in this model for a concrete device in the physical world that is attached to it. The resource model in IoT-A further represents this entity in the digital world, and describes the physical type of the resource and its interfaces. The third part is the service model, which elaborates the service type and identifies services by inputs, outputs, preconditions and effects. It also describes how to access a specific service and other technical details.

6.1.5 IoT.est

The EC project IoT.est10 is focused on providing a service creation environment architecture that is designed to accelerate the introduction of new IoT-enabled business services. This architecture is designed to enable the orchestration of business services based on re-usable IoT service components, self-management components for automated configuration and testing, and the abstraction of the heterogeneity of underlying technologies. As such, the project brings together the three disciplines of Internet of Things, service engineering and testing.

In the current Internet of Things, services are often created specifically for particular applications and domains, and therefore a high heterogeneity of networks, communication protocols and information types exists. For that reason, a very important issue is the need for interoperability between these solutions. One of the goals of the project is therefore the implementation of a service creation environment that overcomes the heterogeneity between networked sensors and objects.

IoT.est focuses on four key issues: first, the project researches methods to semi-automatically derive services and related tests from semantic service descriptions; the second goal is to integrate testing into a service creation environment and support an incremental service evolution; the third goal is the definition of a framework for service validation tests that includes automated deployment procedures based on semantics; the final goal is to pursue the development of run-time monitoring to enable Quality of Service.

The overall aim is to semi-automatically derive services and related tests from a so-called "semantic service", including a high-level semantic description of service resources, network attributes and service test procedures. As a basis for the semantic description of service resources, the OWL ontology from the IoT-A project is reused. The main contribution of the work is an IoT service creation environment able to compose business services based on domain and environment knowledge10. Additionally, re-usable IoT service components should be identified and evaluated by integrating testing into all phases of the service cycle. Therefore, test components and automated test mechanisms support IoT service development and provisioning in large-scale infrastructures.

10 http://ict-iotest.eu


6.2 Semantic Descriptions for the Internet of Things

The following subsections describe each of the semantic descriptions defined for the Internet of Things; a comparative analysis among them is offered at the end.

6.2.1 IPSO Alliance Interfaces (IETF)

Among the goals of the Constrained RESTful Environments (CoRE) working group is a REST architecture that fits constrained nodes and networks. Its draft-shelby-core-interfaces-03 defines several functionalities that cover the needs of the Internet of Things transmission technologies. Next we review some points of the interface set for the standard.

The first point is the definition of a Function Set, which consists of input, output and parameter resources containing internal logic, and which may have a subset of mandatory inputs, outputs and parameters to provide minimum interoperability. The IETF draft proposes a common representation for the binding between M2M devices, defining a format based on the CoRE Link Format document. This format represents the binding information accompanied by a set of rules in order to define a binding method, which is defined as a specialized relationship between two resources (M2M).

As defined in the CoRE Resource Directory, all resources and services offered by a device should be discoverable either through a direct link in /.well-known/core or by following successive links starting from /.well-known/core, as defined in the ietf-core-link-format document. The next table shows an example extracted from draft-shelby-core-interfaces-03 in order to illustrate the discovery procedure.

Table 11: Discover services example
Req: GET /.well-known/core
Res: 2.05 Content (application/link-format)
</s>;rt="simple.sen";if="core.b",
</s/light>;rt="simple.sen.lt";if="core.s",
</s/temp>;rt="simple.sen.tmp";if="core.s";obs,
</s/humidity>;rt="simple.sen.hum";if="core.s",
</a>;rt="simple.act";if="core.b",
</a/1/led>;rt="simple.act.led";if="core.a",
</a/2/led>;rt="simple.act.led";if="core.a",
</d>;rt="simple.dev";if="core.ll",
</l>;if="core.lb",

The interface section of draft-shelby-core-interfaces-03 describes REST interfaces for Link List, Batch, Sensor, Parameter, Actuator and Binding table resources. Some variants such as Linked Batch or Read-Only Parameter are also defined. The interfaces support the use of plain text and/or SenML Media Types to define their payloads. The next table shows the methods defined for each resource mentioned before; the column "if=" gives the Interface Description attribute value that is used in the CoRE Link Format for a resource implementing the interface.

Table 12: Defined interfaces in draft-shelby-core-interfaces-03

Interface           | if=      | Methods
Link List           | core.ll  | GET
Batch               | core.b   | GET, PUT, POST (where applicable)
Linked Batch        | core.lb  | GET, PUT, POST, DELETE (where applicable)
Sensor              | core.s   | GET
Parameter           | core.p   | GET, PUT
Read-Only Parameter | core.rp  | GET
Actuator            | core.a   | GET, PUT, POST
Binding             | core.bnd | GET, POST, DELETE

In order to retrieve (GET) a list of resources on a web server, it is only necessary to use the Link List interface, where the request should contain an Accept option with the application/link-format content type. This option may be elided if the resource does not support any other form. The request returns a list of URI references, expressed as absolute paths to the resources, as defined in the CoRE Link Format document. The next table shows an example from draft-shelby-core-interfaces-03 that illustrates how the Link List interface works in practice on the /d resource. It can be seen that the resource contains two sub-resources named /d/name and /d/model.

Table 13: Example of using the Link List interface
Req: GET /d (Accept: application/link-format)
Res: 2.05 Content (application/link-format)
</d/name>;rt="simple.dev.n";if="core.p",
</d/model>;rt="simple.dev.mdl";if="core.rp"

To manipulate a collection of sub-resources concurrently, the Batch interface should be used; it supports the same methods as its sub-resources in order to retrieve (GET), set (PUT) or toggle (POST) their values. Support for multiple sub-resources requires the use of SenML in this interface, and, as an extension of the Link List interface, it must support the same methods. The following example interacts with a Batch /s to retrieve the values of several resources of this directory.

Table 14: Example of using the Batch Interface
Req: GET /s
Res: 2.05 Content (application/senml+json)
{"e":[
  { "n": "light", "v": 123, "u": "lx" },
  { "n": "temp", "v": 27.2, "u": "degC" },
  { "n": "humidity", "v": 80, "u": "%RH" }]
}

The Linked Batch is an extension of the Batch interface, which is dynamically controlled by the web client and has no sub-resources. Instead, the resources forming the batch are referenced using CoRE Link Format and RFC5988. This is contrary to the basic Batch (a static collection defined by the web server). The following example illustrates this interface with several examples where it is used with POST and GET methods.



Table 15: Example of using the Linked Batch interface
Req: POST /l (Content-type: application/link-format)
</s/light>,</s/temp>
Res: 2.04 Changed
Req: GET /l
Res: 2.05 Content (application/senml+json)
{"e":[
  { "n": "/s/light", "v": 123, "u": "lx" },
  { "n": "/s/temp", "v": 27.2, "u": "degC" }]
}
Req: POST /l (Content-type: application/link-format)
</s/humidity>
Res: 2.04 Changed
Req: GET /l (Accept: application/link-format)
Res: 2.05 Content (application/link-format)
</s/light>,</s/temp>,</s/humidity>

The Sensor interface has been defined to retrieve values from a sensor. Either plain text or SenML can be used as the Media Type, but in order to retrieve single measurements requiring no meta-data, the use of plain text is recommended. The following example requests the same value in both representations and shows how the use of SenML to retrieve a single value strongly impacts the length of the payload.

Table 16: Example of using the Sensor interface
Req: GET /s/humidity (Accept: text/plain)
Res: 2.05 Content (text/plain)
80
Req: GET /s/humidity (Accept: application/senml+json)
Res: 2.05 Content (application/senml+json)
{"e":[ { "n": "humidity", "v": 80, "u": "%RH" }]}

For configurable parameters and other information, the Parameter interface has been defined, where the value of a parameter can be read (GET) or set (PUT). The next example shows requests for reading and setting a parameter.

Table 17: Example of using the Parameter interface
Req: GET /d/name
Res: 2.05 Content (text/plain)
node5
Req: PUT /d/name (text/plain)
outdoor
Res: 2.04 Changed

The Read-Only Parameter interface is conceptualized for parameters that can be read (GET) but not set (PUT). This example shows a request for reading a parameter.


Table 18: Example of using the Read-only Parameter interface
Req: GET /d/model
Res: 2.05 Content (text/plain)
SuperNode200

The Actuator interface has been defined to model different kinds of actuators where the change of a value has an effect on the environment. Several actuators, such as LEDs, relays, light dimmers and motor controllers, can be manipulated through the read (GET) and set (PUT) methods. POST (with no body) can be used to toggle an actuator between its possible values, for example a light that can only be on or off. The following example shows requests to read, set and toggle an actuator, in this case a simple LED.

Table 19: Example of using the Actuator interface
Req: GET /a/1/led
Res: 2.05 Content (text/plain)
0
Req: PUT /a/1/led (text/plain)
1
Res: 2.04 Changed
Req: POST /a/1/led (text/plain)
Res: 2.04 Changed
Req: GET /a/1/led
Res: 2.05 Content (text/plain)
0

To manipulate the binding table, the Binding interface has been defined, where new bindings are appended to the table by the use of a POST method with a content type of application/link-format. It requires that all the links contained in the payload have the relation type "boundto". The GET request returns the current status of a binding table and the DELETE request removes the table. The following example shows requests for adding, retrieving and deleting bindings in a binding table.

Table 20: Example of using the Binding interface
Req: POST /bnd (Content-type: application/link-format)
;rel="boundto";anchor="/a/light";bind="obs";pmin="10";pmax="60"
Res: 2.04 Changed
Req: GET /bnd
Res: 2.05 Content (application/senml+json)
;rel="boundto";anchor="/a/light";bind="obs";pmin="10";pmax="60"
Req: DELETE /bnd
Res: 2.04 Changed

Resource observation is used to follow the changes in a resource and receive asynchronous notifications. For this purpose, the ietf-core-observe document defines three query parameters, described in the next table. Hence, if a resource is marked as observable in its link description, it should support these observation parameters. Note that the Change Step parameter is only supported by resources with an atomic numeric value.



Table 21: Observable parameters

Parameter          | Query Parameter | Value
Minimum Period (s) | pmin            | xsd:integer (>0)
Maximum Period (s) | pmax            | xsd:integer (>0)
Change Step        | st              | xsd:decimal (>0)

The next table shows an Observation request using these three query parameters. The value of Observe indicates the number of seconds since the observation was made.

Table 22: Example of using the Observation request
Req: GET Observe /s/temp?pmin=10&pmax=60&st=1
Res: 2.05 Content Observe:0 (text/plain)
23.2
Res: 2.05 Content Observe:60 (text/plain)
23.0
Res: 2.05 Content Observe:80 (text/plain)
22.0
Res: 2.05 Content Observe:140 (text/plain)
21.8

6.2.2 Representing CoRE Link Collections in JSON

Since Internet of Things devices can be constrained nodes on constrained networks, the information exchanged between nodes as a collection of links is normally defined in the CoRE Link Format. However, the use of JSON to represent this information can be more useful when the networks are able to manage more bandwidth and the information is integrated into information systems. This is the main reason why the CoRE working group proposes another activity to define a simple mapping to JSON that carries the information of the formats specified in Web Linking and CoRE Link Format.

In this mapping, each web link ("link-value") is a collection of attributes ("link-param") applied to a "URI-Reference"; in other words, a JSON object formed by name/value pairs (members), where the name corresponds to the parameter or attribute name ("parname") and the value to the parameter or attribute value ("ptoken" or "quoted-string"). The latter option can mean that the results need to be parsed as defined in CoRE Link Format. When an attribute is duplicated, its values are represented as a JSON array of string values. The URI is represented by the name/value pair "href" and the URI-Reference. The next example, from draft-bormann-corelinks-json-01, illustrates this mapping.



Table 23: Example of mapping

Link Format:
</sensors>;ct=40;title="Sensor Index",
</sensors/temp>;rt="temperature-c";if="sensor",
</sensors/light>;rt="light-lux";if="sensor",
<http://www.example.com/sensors/t123>;anchor="/sensors/temp";rel="describedby",
</t>;anchor="/sensors/temp";rel="alternate"

JSON mapping:
[{"href":"/sensors","ct":"40","title":"Sensor Index"},
 {"href":"/sensors/temp","rt":"temperature-c","if":"sensor"},
 {"href":"/sensors/light","rt":"light-lux","if":"sensor"},
 {"href":"http://www.example.com/sensors/t123","anchor":"/sensors/temp","rel":"describedby"},
 {"href":"/t","anchor":"/sensors/temp","rel":"alternate"}]

6.2.3 SPITFIRE: Semantic Web of Things

From the viewpoint of the Semantic Web of Things we find SPITFIRE [63]. It is a European project that aims to integrate the current Internet with the embedded computing world. The discovery mechanism studied in this project for sensors and things in general is based on the definition of a specific ontology to describe those things. This is done by providing abstractions for things, fundamental services for search and annotation, as well as integrating sensors and things into the Linked Open Data (LOD) cloud, which is an effort to link semantic data on the web. Specifically, it is based on Triple Spaces, i.e. the Resource Description Framework (RDF) Model and Syntax Specification, to enhance the interaction with sensors through the web. The triple store server obtains the information through a crawler and is queried with SPARQL to obtain concrete information related to the sensor, but provides another level of abstraction.

The main technique for machine-readable representations of knowledge on the web is RDF, which represents knowledge as triples (subject, predicate, object). A set of triples forms a graph where subjects and objects are vertices and predicates are edges. From the graph formed by these triples, one can infer information by exploiting, for example, the knowledge that "is-in" is a transitive property in an RDF graph. It is imperative to use non-ambiguous identifiers for subjects, predicates and objects to guarantee uniqueness on an Internet scale, which is achieved by encoding them as URIs. An example of a triple could be expressed as follows:
- Subject: http://example.com/sensors/sensor3
- Predicate: http://www.loa-cnr.it/ontologies/DUL.owl#hasLocation
- Object: http://example.com/parkingSpot/spot41

Ontologies play an important role in defining the URIs for a specific application domain and their relationships to each other, as they "standardize" agreed conceptual knowledge. Assuming that sensors are described by such RDF triples, a search service can find sensors based on meta-data such as sensor type, location or accuracy. Queries can be expressed in SPARQL, which is "similar" to SQL and provides a powerful way to search knowledge across RDF


triples. An example query with RDF and SPARQL is presented in the following table. The focus is on finding subjects observing the occupancy of parking places in Berlin. Information about these parking spots is stored in triples, and a simple SPARQL query allows finding the free spots near a certain location.

Table 24: Example of SPARQL query

SELECT (COUNT(DISTINCT ?node) AS ?spots)
WHERE {
  ?node a ssn:Sensor ;
        ssn:observes ex:Occupancy ;
        dul:hasLocation ?spot .
  ?spot a ex:ParkingSpot ;
        dul:hasLocation dbpedia:Berlin .
}

For the semantic representation with RDF, the SSN-XG ontology [70] for sensor networks is being considered, which also allows the retrieval, creation, modification or deletion of resources in the RDF graphs. This representation of knowledge is based on a common ontology which is shared by all the entities involved in the communication. Finally, in addition to SPARQL, another technique has been considered to take into account the fact that sensors (and things in general) change frequently. This additional mechanism is based on heuristics to efficiently identify entities that are likely to match a given search. It employs prediction models to compute the probability that the current high-level state of a semantic entity matches the value specified in the query [71]. Thus, when a search is performed, the search engine executes the indexed prediction models to obtain the matching probability without communicating with the virtual sensor. This approach is integrated into SPARQL, so queries may directly use the sensor data, but the "ORDER BY" clause should be used to sort the results from high to low probability.

6.2.4 EXI: Efficient XML Interchange

The Efficient XML Interchange (EXI) format is a very compact, high-performance XML binary representation that significantly reduces bandwidth requirements without compromising the efficient use of other resources such as code size, battery life, processing power and memory. EXI uses a grammar-driven approach that achieves very efficient encodings using a straightforward encoding algorithm and a small set of datatype representations. It is possible to use available schema information to improve compactness and performance, but EXI does not depend on accurate, complete or current schemas to work. This is also interesting since current digital signature techniques from XML can be re-used for EXI. Some related work relevant to EXI can be found in the IoT@Work project, where EXI is used to represent the capabilities of a sensor to access another one. Further details about this are presented in the Privacy, Security and Access Control section of this document.


6.2.5 oBIX: Open Building Information Xchange

oBIX (Open Building Information Xchange) is a specification published by the Organization for the Advancement of Structured Information Standards (OASIS) in December 2006. This platform-independent technology is designed to provide M2M communications between embedded software systems over existing networks using standard technologies such as XML and HTTP. oBIX is based on a service-oriented client/server architecture and defines only three request/response services, used to read and manipulate data or to invoke operations. Each service response is an oBIX XML document that contains the requested information or the result of the service. The implementation of these three request/response services is called a protocol binding. There are two different protocol bindings specified by the oBIX standard. The first is the HTTP binding, which simply maps oBIX requests to HTTP methods. The second is the SOAP binding, which maps a SOAP operation to each of the three oBIX requests.

A fundamental element in the oBIX specification is its concise but extensible object model. Objects are described by attributes, called facets, and are identified by a name, a URL or both. Each object can contain other objects, and the object model can be extended by a mechanism called contracts. Contracts are used to define new types and also provide the possibility of specifying default values. The second essential part of the oBIX specification is the simple XML syntax used to represent the object model: basically, each oBIX object maps to exactly one XML element, and sub-objects result in the nesting of XML elements. The following example shows the most current load reading from the first floor of Mandat International in Geneva.

Table 25: Example of using oBIX to read a value

Mandat_International_Hall.Floor1.load
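The XML element itself did not survive the document conversion; as a hedged sketch only (the value, unit, href and display name below are assumptions, not measured values from the deployment), an oBIX read response of this kind could look like:

<?xml version="1.0" encoding="UTF-8"?>
<!-- hedged sketch: a single oBIX "real" object returned by a read request -->
<real name="load"
      displayName="Mandat_International_Hall.Floor1.load"
      href="http://example.com/obix/MandatInternational/Floor1/load"
      val="12.5"/>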



6.3 Comparative table between Data Exchange Technologies on the Internet of Things

The following table shows a comparison of some characteristics of the different data exchange technologies currently used in several Internet of Things projects.

Table 26: Data exchange technologies for the Internet of Things

Features | IPSO Text Plain | IPSO SenML+JSON | CoAP RD | Weblinks | DNS-SD | RDF | EXI | oBIX
Format | Text-plain | RFC5988/JSON | RFC5988 | RFC5988/JSON | DNS | XML | Binary XML | XML
Widely extended | YES | YES | YES | NO | YES | NO | NO | YES
Complex embedded parser (requires high memory capabilities) | NO | YES | NO | YES | NO | YES | YES | YES
Search Engine | CoAP RD | JSON-based such as ElasticSearch | CoAP RD | mDNS | mDNS | SPARQL | XML-based | XML-based
Communications Overload | Low | Medium | Low | Medium | Low | High | High | Very High
Semantic description capabilities | NO (Very Low) | High | Medium | High | Medium | Very High | Very High | Very High
Memory Requirements | Low | Medium | Low | Medium | Low | Very High | Very High | High

oBIX is the most relevant of all the presented solutions, since it is a very powerful protocol, commonly used in Building Automation Systems. oBIX offers a relevant alternative to BACnet/WS and other approaches for opening Building Automation protocols towards Web Services. oBIX is based on HTTP and SOAP, and is therefore highly interoperable and relevant for this work. The main problem is that oBIX is very heavy for constrained environments, is based on SOAP, and is not available for CoAP. For that reason, an integration of oBIX over CoAP will be proposed under the framework of the IoT6 project. Further details can be found in Deliverable D4.1.

Regarding the digcovery solution, the IPSO Alliance approaches will be considered initially. The IPSO Alliance approach is considered the most suitable, since it is in line with the CoRE Working Group technologies and the industrial sector. SenML/JSON is considered the most adequate solution, since it offers greater capability to describe the native semantics of the resources and services. In addition, it needs to be taken into account that the search engine used by digcovery (see next Section) is optimized to work with JSON descriptions.



7 Search Engine: context awareness

This Section presents how the search engine used in digcovery handles the information received, as well as how it provides fast and customized searches with context awareness in terms of geo-location, domain, and application profiles. It presents a description of ElasticSearch, with some illustrative examples of use, and its integration into the proposed architecture.

7.1 Elastic Search

ElasticSearch is an open source search engine for distributed RESTful-based architectures. The feature most relevant to this work is its integration with digcovery. Other features include:
- Ease of configuration, minimizing the effort needed to launch a search.
- An architecture designed for distribution, scaling from a single node to hundreds, offering high availability, support for large amounts of data and short response times.
- Searches are in real time.
- It exposes a RESTful HTTP API and uses JSON to format both requests and responses.
- It can be managed using a native API for Java.
- It is free of data schema, i.e., an explicit definition of the schema is not required (resulting in ease of configuration).
- It supports multitenancy, including multiple indexes and multiple types (i.e. it can be extended to manage different types of content in a Content Management System).
- It offers the ability to search on any combination of indexes and types, with ACID properties of transactional systems for operations at the document level.
- It is based on Apache Lucene.

7.1.1 Query DSL

ElasticSearch provides a full Query DSL based on JSON to define queries. In general, there are basic queries such as term or prefix. There are also compound queries like the bool query. Queries can also have filters associated with them, such as the filtered or constant score queries with specific filter queries. The Query DSL can be considered as an AST of queries. Certain queries can contain other queries (like the bool query), others can contain filters (like the constant score), and some can contain both a query and a filter (like the filtered query). Each of these can contain any query from the list of queries or any filter from the list of filters, resulting in the ability to build quite complex (and interesting) queries. Both queries and filters can be used in different APIs, e.g. within a search query or as a facet filter. This section explains the components (queries and filters) that can form the AST. Filters are very convenient since they perform an order of magnitude better than a plain query: no scoring is performed and they are automatically cached.
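A minimal sketch of such a composed query (the field names "rt", "domain" and "battery" are hypothetical, chosen only to illustrate the structure): a filtered query whose bool filter combines a cacheable term filter with a range filter:

{
  "query": {
    "filtered": {
      "query":  { "term": { "rt": "light" } },
      "filter": {
        "bool": {
          "must": [
            { "term":  { "domain": "floor1.example.com" } },
            { "range": { "battery": { "from": 50 } } }
          ]
        }
      }
    }
  }
}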


7.1.2 Filters and Caching

Filters can be ideal candidates for caching. Caching the result of a filter does not require a large amount of memory, and causes other queries executing against the same filter (same parameters) to be extremely fast. Some filters already produce a result that is easily cacheable, and the difference between caching and not caching them lies only in the act of placing the result in the cache or not. These filters, which include the term, terms, prefix, and range filters, are cached by default and are recommended (compared to the equivalent query version) when the same filter (same parameters) will be reused across multiple different queries (for example, a range filter for age higher than 10).

Other filters, usually those already working with the field data loaded into memory, are not cached by default. Those filters are already very fast, and the process of caching them requires extra processing in order to allow the filter result to be used with queries different from the one executed. These filters, including the geo, numeric range, and script filters, are not cached by default. The last type of filters are those working with other filters: the 'and', 'not' and 'or' filters are not cached, as they basically just manipulate the internal filters.

7.1.3 Mapping Types

Mapping types are a way to divide the documents indexed into the same index into logical groups, i.e. like tables in a database. Though there is a separation between types, it is not a full separation (all documents end up within the same Lucene index). Field names with the same name across types are highly recommended to have the same type and the same mapping characteristics (analysis settings, for example). There is an effort to allow choosing directly which field to use by means of a type prefix (my_type.my_field). In practice, though, this restriction is almost never an issue; the field name usually ends up being a good indication of its "typeness" (e.g. "first_name" will always be a string). Note that this does not apply to the cross-index case.
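A hedged sketch of what a mapping for a digcovery resource type could look like (the index name, type name and field names are assumptions made for illustration), created through the same REST interface used in the examples below:

curl -XPUT 'http://localhost:9200/digcovery' -d '{
  "mappings": {
    "resource": {
      "properties": {
        "rt":       { "type": "string" },
        "domain":   { "type": "string" },
        "location": { "type": "geo_point" }
      }
    }
  }
}'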



7.1.4 Indexing Data Example

This subsection presents an example of indexing some data into our ElasticSearch instance.

curl -XPUT 'http://localhost:9200/blog/user/dilbert' -d '{ "name" : "Dilbert Brown" }'

curl -XPUT 'http://localhost:9200/blog/post/1' -d '
{
  "user": "dilbert",
  "postDate": "2011-12-15",
  "body": "Search is hard. Search should be easy.",
  "title": "On search"
}'

curl -XPUT 'http://localhost:9200/blog/post/2' -d '
{
  "user": "dilbert",
  "postDate": "2011-12-12",
  "body": "Distribution is hard. Distribution should be easy.",
  "title": "On distributed search"
}'

curl -XPUT 'http://localhost:9200/blog/post/3' -d '
{
  "user": "dilbert",
  "postDate": "2011-12-10",
  "body": "Lorem ipsum dolor sit amet, consectetuer adipiscing elit, sed diam nonummy nibh euismod tincidunt ut laoreet dolore magna aliquam erat volutpat",
  "title": "Lorem ipsum"
}'

7.1.5 Searching Data Example Find all blog posts by Dilbert: curl 'http://localhost:9200/blog/post/_search?q=user:dilbert&pretty=true'


Response:

{ "took" : 85, "timed_out" : false, "_shards" : { "total" : 5, "successful" : 5, "failed" : 0 }, "hits" : { "total" : 3, "max_score" : 1.0, "hits" : [ { "_index" : "blog", "_type" : "post", "_id" : "1", "_score" : 1.0, "_source" : { "user": "dilbert", "postDate": "2011-12-15", "body": "Search is hard. Search should be easy." , "title": "On search" } }, { "_index" : "blog", "_type" : "post", "_id" : "2", "_score" : 0.30685282, "_source" : { "user": "dilbert", "postDate": "2011-12-12", "body": "Distribution is hard. Distribution should be easy." , "title": "On distributed search" } }, { "_index" : "blog", "_type" : "post", "_id" : "3", "_score" : 0.30685282, "_source" : { "user": "dilbert", "postDate": "2011-12-10", "body": "Lorem ipsum dolor sit amet, consectetuer adipiscing elit, sed diam nonummy nibh euismod tincidunt ut laoreet dolore magna aliquam erat volutpat" , "title": "Lorem ipsum" } } ] }


Searching Data on POST Example

curl -XGET 'http://localhost:9200/blog/_search?pretty=true' -d '
{
  "query" : {
    "range" : {
      "postDate" : { "from" : "2011-12-10", "to" : "2011-12-12" }
    }
  }
}'

7.1.6 ElasticSearch in Digcovery

ElasticSearch is an interesting tool to store and retrieve data quickly, offering JSON-based operations that allow an easy integration with the digcovery system, a fast data store with multiple access options, and a RESTful search engine.

ElasticSearch will be used in digcovery, on the one hand, to collect unorganized data from the different digrectories and make feasible their look-up and filtering based on service and resource type, and, on the other hand, to offer context-awareness solutions based on geo-location for applications such as the digcovery mobile application mentioned in Section 4.3.3.2.

7.1.6.1 Resource and Service Type look-up for digcovery with ElasticSearch

Digcovery presents an elastic architecture; this means that several digrectories, with very different types of resources and services and in different locations, will be integrated. But even though this integration is flexible and elastic, a mechanism is needed to carry out globally organized look-ups, i.e. look-ups or queries filtered by some attribute. Current solutions for discovery in the Internet of Things, such as CoAP discovery described in RFC 6690 [75], define the look-up/query based on resource types, i.e. they allow the resources to be discovered to be filtered by specifying a resource type (rt) in the query. Similar mechanisms are supported in digcovery through this elastic architecture, as sketched below. ElasticSearch provides the architecture and mechanisms required to manage a distributed and heterogeneous organization of resources, being able to return, in an optimal time, organized results filtered by resource type (e.g. light). Thereby, it offers the same potential as CoAP discovery without being limited to discovery in a local domain (multicast-based) or to the resources of a centralized server (CoAP Resource Directory).
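A hedged sketch of such a resource-type look-up (the field name "rt" and its value are assumptions made for illustration), equivalent to a CoAP rt="light" query but issued against the digcovery index:

'{
  "query" : {
    "filtered" : {
      "query"  : { "match_all" : {} },
      "filter" : { "term" : { "rt" : "light" } }
    }
  }
}'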

7.1.6.2 Geo-location for digcovery with ElasticSearch

In addition to filtering by resource type, a way to make discovery easier for the end-user, in environments such as smart cities, is to discover services that are close to the user. The meaning of close is very different from the networking and the physical points of view: close in networking means being under a common domain, on a common link, which is usually mapped to a specific location; but when the domain is extended through VLANs and tunnels, this notion loses its meaning of closeness in terms of physical distance. At the same time, with the proliferation of wireless networks such as 3G, LTE, Wi-Fi and WiMAX, a user can be located next to a device but belong to a totally different domain. For that reason, when addressing a discovery solution for domains such as Smart Cities, it


is required to consider a global service discovery where the integration of multiple domains is independent of the location, i.e. multiple domains from a similar location can be integrated, and closeness and neighbourhood concepts can then be applied through interaction with the environment (physical interaction with tags and QR codes) and, mainly, through the context awareness provided by the geo-location of the devices in latitude/longitude coordinates. For that reason, a search engine is required that is optimized to offer geo-location-based and resource-type-based search, even when it is managing multiple domains with heterogeneous resources and services, and where multiple types of resources and services will be published and stored without any class, resource-type or location organization.

The following example presents a geo-location query in the ElasticSearch module of digcovery.

Geo-location based Query/Look-up in ElasticSearch:

'{ "query" : { "filtered" : { "query" : { "range" : { "longitude" : { "from" : "37.997", "to" : "37.999" } } }, "filter" : { "range" : { "latitude" : { "from" : "-1.142", "to" : "-1.140" } } } } } }'



8 Communications interfaces and management functions

Figure 16 presents the interaction among the different components of the architecture. The core of the architecture is the digcovery. It is a central system which can be extended in the cloud to manage the different domains and subnets; it can be seen as a discovery platform such as Google or Yahoo, but for things. It offers a Web interface and is also accessible through the DNS protocol, which serves as the main protocol. It also offers a set of interfaces to interoperate with clients which are not using DNS, such as the CoAP Resource Directory, the Retrieve, Locate and Update Service (RLUS) over UDP, and finally GSON, used to interact with the digrectories.

Digrectories are the components deployed locally in each domain to handle the domains and subnets. These digrectories are focused on connecting, through the appropriate connector, the devices, things, books or objects connected or located in the domain. Specifically, domains are considered to be composed of:
1. IPv6 devices, which can be managed through mDNS.
2. 6LoWPAN and GLoWBAL IPv6 devices, which are managed through the proposed lightweight multicast DNS (lmDNS).
3. RFID tags, which are managed through the EPCIS.
4. Legacy devices, which are managed through proprietary/legacy protocols such as CAN, X10, EIB/KNX and BACnet.

Once a digrectory has established communication with the services from the IPv6, 6LoWPAN, GLoWBAL IPv6, EPCIS or legacy technology, it maps them to DNS through the built drivers (see Deliverable D3.3). These drivers translate from the original protocol to a unified DNS-based protocol in order to present a homogeneous face to the digcovery core and the clients connected to digcovery.

The clients are highly heterogeneous depending on the technology used, as are the digrectories. DNS, as the main IPv6 technology for discovery, is proposed for the clients, but the CoAP Resource Directory will also be supported since it is the main technology for discovery from the CoRE working group. Other specific interfaces for management, based on RLUS over CoAP, will also be supported in order to interoperate with other applications and platforms. Although three different interfaces are proposed, DNS is the core technology. For that reason, RLUS and the CoAP Resource Directory are wrapped around the DNS functions and API through the wrappers presented in the following figure. Finally, an M2M platform has already been integrated, and the integration of a Global Sensor Network (GSN) platform is taking place through a fruitful collaboration with the OpenIoT project. For the integration with GSN, a plugin will be developed to interoperate with digcovery through a WebServices-based interface.

The following subsections describe the interfaces presented in Figure 16:
1. RLUS is used as the management interface between other platforms and applications and the digcovery.
2. JSON is used for the intra-communication between the digrectories and the digcovery


in order to inform about the accessible resources and keep the tables synchronized. Note that DNS is encapsulated in the GSON objects.
3. The CoAP Resource Directory is implemented to maintain compatibility with solutions that do not support the DNS protocol. It is common that Smart Things do not support DNS, in order to reduce their footprint, i.e. the RAM and ROM required.
4. DNS is the main protocol of the architecture; it is based on the common DNS-SD record format and mDNS messages.
5. GSN: an interface has been implemented to enable interoperation with the digcovery plugin for GSN.

Figure 16: General APIs view



8.1 RLUS management interface over UDP
RLUS (Retrieve, Locate and Update Service) is a Service Functional Model defined by a set of commands and operations. It defines the following interface (see http://www.omg.org/spec/RLUS/1.0.1/):

Retrieve: Gets the services provided by a specific node, tag, sensor, device, i.e. thing. Example query: retrieve(… or name). Example reply: “service1;service2;…”.

Locate: Search-engine command, used to locate the domains where the services and resources are located. Example query: locate(regular_exp, search_code). Example reply: a data string depending on the type of search.

Update Services: Used to update the features and status of a resource. Example query: uptservice(ipv6, field, newdata). Example reply: ACK or DENY.

The RLUS specification provides a flexible means for querying data. The RLUS messages used by the digrectory are profiled by so-called semantic signifiers, which define the syntax and semantics of the data exchanged through the message. The construct of the semantic signifier addresses these needs: with the help of semantic signifiers, statements can be made about the contents and structures that should be communicated. This is similar to a service description via WSDL documents. Figuratively, the underlying information model of the (resource description) data and services is described by semantic signifiers. The logical parts of a semantic signifier are a name, a description, and a normative data structure that describes instances of the semantic signifier. This might include implementation guidelines, schemas and specifications for validation (e.g. resource description format, EXI document structures, etc.).

As the term semantic signifier implies, a specific semantic description on the storage backend of the digcovery platform is addressed; primarily, services data is described in a separate context. The filling of a semantic signifier (i.e. its instantiation) with services and resources data relates to the ability of the system to interpret data and build relations between them. Generally, two flavours are imaginable, i.e. content-aware and content-agnostic. These two extremes identify the extent to which an RLUS implementation is capable of interpreting resources and services data. The content-agnostic flavour is the current solution based on DNS and is the initial approach. The content-aware approach will be future work, in which ontologies such as SSN ontology-based applications (see http://www.w3.org/community/ssn-cg/wiki/SSN_Applications) will be considered.

The previously mentioned options are wrapped over the previously mentioned technologies, i.e. CoAP and RLUS (HL7 offers similar operations), and the most relevant operations can be mapped to the basic messages available from mDNS/DNS-SD, i.e. DNS.


It is important to note that some operations, such as "DELETE", have not been defined yet, since they are not available in DNS and RLUS. The DELETE operation is found in CoAP, and consequently in solutions such as the CoAP RD from CoRE, but in order to keep the interface compatible and homogeneous for DNS and RLUS, it is not explicitly defined for this project. Therefore, the records' lifetime is managed through the special "Keep Alive" message and "Pledge" options. These options and messages "refresh" the records, and a record is deleted after a period without activity. Removing the "DELETE" operation also removes multiple attack vectors and vulnerabilities, which helps to ensure the security of the resources.
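As an illustration of the operations listed above, the following Java interface sketches the digcovery management API profiled from RLUS. Method and type names are assumptions made for this example; the normative definitions are the RLUS specification and Deliverable D3.3.

```java
// Illustrative sketch of the digcovery management operations (Retrieve / Locate /
// Update Services) plus the lifetime refresh used instead of an explicit DELETE.
public interface DigcoveryManagement {

    /** Retrieve: services provided by a specific thing, identified by IPv6 address or name. */
    java.util.List<String> retrieve(String ipv6OrName);

    /** Locate: search-engine command returning the domains where matching services live. */
    java.util.List<String> locate(String regularExpression, int searchCode);

    /** Update Services: update a field of a registered resource; returns ACK (true) or DENY (false). */
    boolean updateService(String ipv6, String field, String newData);

    /** Keep Alive / Pledge: refresh a record so it is not expired after a period of inactivity. */
    void keepAlive(String recordName);
}
```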

8.2 JSON – Java interface between digcovery and the digrectories
This is mainly used for the exchange between the digrectories and the digcovery. Therefore, it will be totally dependent on the Java class used for the description of the DNS records. It will be detailed in deliverable D3.3.
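Since the Java class for the DNS record description is specified in Deliverable D3.3, the following is only a minimal sketch with a hypothetical ServiceRecord class; it illustrates how a digrectory could encapsulate DNS-SD data in JSON using GSON.

```java
import com.google.gson.Gson;
import java.util.Map;

public class ServiceRecordJson {

    /** Hypothetical description of a DNS-SD service entry (the real class is in D3.3). */
    static class ServiceRecord {
        String instance;          // e.g. "temp1._coap._udp.domain.example"
        String host;              // AAAA target
        int port;                 // SRV port
        Map<String, String> txt;  // TXT key/value attributes
    }

    public static void main(String[] args) {
        ServiceRecord rec = new ServiceRecord();
        rec.instance = "temp1._coap._udp.domain.example";
        rec.host = "2001:db8::1";
        rec.port = 5683;
        rec.txt = Map.of("rt", "TempC", "if", "sensor");

        Gson gson = new Gson();
        String json = gson.toJson(rec);                                   // digrectory -> digcovery
        ServiceRecord back = gson.fromJson(json, ServiceRecord.class);    // digcovery side
        System.out.println(json + " -> " + back.instance);
    }
}
```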

8.3 CoAP Resource Directory
CoAP is based on the standard from the IETF CoRE Working Group. A CoAP resource is a functionality offered/available within a server (e.g. a sensor node) that is expressed in the form of a path, for instance “/TempC”.

A CoAP URI is composed of a scheme, an authority, a path and a query:

coap://host[:port]/path[?query]

i.e. the scheme “coap:”, the authority “//” host [“:” port], the path, and an optional “?” query. The host may be an IP literal or IP address, or a name; when a name is used, a name resolution service such as DNS is required.

Resource Discovery on a server(s) is performed as:

GET /.well-known/core?rt=TempC

Resource Directories can be used to store, at a single entity, the registrations of the resources offered within the LoWPAN.
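The following is a minimal Java sketch of this discovery step using the Eclipse Californium CoAP library. Californium is only one possible choice and is not mandated by the deliverable; the server address is hypothetical.

```java
import org.eclipse.californium.core.CoapClient;
import org.eclipse.californium.core.CoapResponse;

public class CoapDiscoveryExample {
    public static void main(String[] args) throws Exception {
        // Resource discovery filtered by resource type rt=TempC, as in the example above.
        CoapClient client = new CoapClient("coap://[2001:db8::1]/.well-known/core?rt=TempC");
        CoapResponse response = client.get();
        if (response != null) {
            // The payload is a CoRE Link Format list, e.g. </TempC>;rt="TempC"
            System.out.println(response.getResponseText());
        }
    }
}
```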

8.4 GSN interface
GSN is a middleware focused on the data management of the Smart Things. It presents a centralized architecture with several connectors to Smart Things through WebServices such as RESTful and SOAP. GSN did not initially support CoAP, since the platform was developed before the CoAP protocol was defined. However, it has been extended with CoAP as part of the collaboration with the OpenIoT project, and a plugin to interoperate with the digcovery will be developed.


The interface defined is composed of the following methods/operations.

RegisterComponent: Exports functionality from digcovery to GSN through the WebServices interface. Example query: registerComponent(Service), where Service = {Name, port, ip, ptrList, txtList, GPSPosition}. Example reply: ACK or DENY.

ExportEXI: Gets the EXI description of the sensor. Example query: getExiDescription(Ipv6, port). Example reply: Service description.

ConnectRAWComponent: Opens a connection with the sensor through UDP. Example query: connectRAWmode(Ipv6, port). Example reply: ACK or DENY.

ConnectCoAPComponent: Opens a connection with the sensor through CoAP. Example query: sendCoAPGET(Ipv6, port, params). Example reply: ACK with data, or DENY.

ListenComponent: Listens to the current status of the sensor. The "status" attribute is being defined in all the sensors as a basic attribute to know whether the sensor is alive, its current value, etc. Example query: getStatus(Service or {Ipv6, port}). Example reply: current sensor status.

ObserveComponent: Applies the conditional Observe protocol over one sensor in order to receive attributes only under specific conditions. Example query: setObserver(Service or {Ipv6, port}). Example reply: ACK or DENY.

GetAttribute: Allows an attribute to be requested at any time. Example query: getAttrValue(Service). Example reply: Value.

GetEntities: Allows entities to be obtained through the digcovery under some search-engine concepts, such as lights, temperature sensors or books. The search can be focused on location, technology, type of device, etc., and follows the same search engine as the digcovery core. Example query: digIt(SearchType, Service or RegularExpression). Example reply: depends on the SearchType.

Integrity and control management messages between GSN and Digcovery:

DisconnectionNotify: Notifies GSN, from the digcovery, that a sensor is no longer available. Example query: disconnectionNotify(Service). Example reply: ACK or DENY.

NewSensorNotify: Notifies a new sensor matching a previous subscription or query. Example query: newServiceNotify(Service). Example reply: ACK or DENY.

AlternativeSensorNotify: Notifies an alternative sensor to be used when a sensor connection has been lost or the sensor has been disconnected. Example query: altSensorNotify(Service). Example reply: ACK or DENY.

DeregisterComponent: Unloads or destroys the registration of the sensor with GSN and allows digcovery to remove it from the list of sensors used by a GSN client. Digcovery keeps track of the sensors used by each GSN client in order to notify the mentioned events, such as disconnections, alternative sensors and new sensors. Example query: deregisterComponent(Service).
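The following Java interface mirrors the operations listed above as a sketch of the digcovery plugin for GSN. Signatures and types are assumptions made for this example; the actual plugin is to be developed within the OpenIoT collaboration.

```java
// Illustrative sketch of the GSN <-> digcovery plugin interface.
public interface GsnDigcoveryPlugin {

    boolean registerComponent(ServiceDescription service);        // ACK or DENY
    String  getExiDescription(String ipv6, int port);              // EXI service description
    boolean connectRawMode(String ipv6, int port);                  // raw UDP connection
    byte[]  sendCoapGet(String ipv6, int port, String params);      // CoAP GET: data or null
    String  getStatus(String ipv6, int port);                       // current sensor status
    boolean setObserver(String ipv6, int port);                     // conditional Observe
    String  getAttrValue(ServiceDescription service);               // ask an attribute anytime
    java.util.List<ServiceDescription> digIt(String searchType, String regularExpression);

    // Integrity and control management messages between GSN and digcovery.
    boolean disconnectionNotify(ServiceDescription service);
    boolean newServiceNotify(ServiceDescription service);
    boolean altSensorNotify(ServiceDescription service);
    boolean deregisterComponent(ServiceDescription service);
}

/** Hypothetical value object: name, port, ip, PTR list, TXT list, GPS position. */
record ServiceDescription(String name, int port, String ip,
                          java.util.List<String> ptrList,
                          java.util.List<String> txtList,
                          String gpsPosition) {}
```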



9 Proposed discovery mechanism (digcovery protocol)

The discovery mechanisms associated with the proposed digcovery protocol are, at a general level, equivalent to those found in other discovery mechanisms for the Internet of Things, such as CoRE resource discovery (RFC 6690) and mDNS. The main differences between the proposed discovery mechanism and the existing ones concern the interaction between the architecture elements, such as digcovery and digrectory, and the management of resources from other directories for resources which are not IP-enabled, such as RFID tags (EPCIS) and legacy technologies.

9.1 Discovery phases
The main phases for service/resource discovery solutions are:
- Publication or Registration (when a directory is involved)
- Discovery
- Resolution

Specifically, these phases differ depending on whether or not the solution involves infrastructure (infrastructure in the service discovery context mainly means the existence of a directory). The following different solutions exist:
1. No directory: ad-hoc or multicast discovery (as defined by the mDNS protocol in the ZeroConf IETF WG or CoAP Discovery in the CoRE IETF WG).
   a. Publication: Announcement. Advertisement of its services and resources to its neighbourhood.
   b. Discovery/Resolution: Query. Look-up of some resource or service in its neighbourhood. Both are defined and solved in the context of CoAP and in the context of IPv6, with the CoAP link format and mDNS respectively.

2. Local/site directory: based on directory implementations such as DNS-SD in the ZeroConf IETF WG or the CoAP RD in the CoRE IETF WG.
   c. Publication: Announcement. Similar to the no-directory solution.
   d. Registration: Registration / Update / Remove. Management of the directory records. This updates the specific directory of the domain where the services and resources are located. DNS-SD defines a structure for the directory but not this management protocol, whereas the CoAP RD does define these management messages.
   e. Discovery: Browsing or Service Enumeration. Lists all the services under some criteria. For DNS-SD and mDNS the search engine is built over the PTR records (note that the instances of the SRV records are PTR records). This query is mainly based on subtypes, e.g. _light._coap._udp. In CoAP it is based on sending queries to /.well-known/core to get a list of all the services, or to /.well-known/core?"query" when a filter is to be applied, e.g. /.well-known/core?rt=light to get lights, as in the example presented for DNS-SD.
   f. Resolution: Look Up / Query. Once the appropriate instance of the record is chosen (i.e. a PTR record for DNS), this asks for the addressing (A/AAAA), the service description (SRV), and the resource/attribute descriptions (TXT).
      i. Look Up: Get more information about a service instance (remember that instances in DNS-SD are based on PTR records), such as the complete service name, its hostname and port (SRV entry), and then TXT entries for extended details.
      ii. Query: Get the IP address of a resource (hostname) (A and AAAA records in DNS-SD).
Note that the CoAP RD and CoAP discovery have no resolution step, since the browsing directly offers the resources; the service/resource description is therefore obtained in only one step.

The following sub-sections present the most relevant phases for the digcovery architecture:
1. How to register smart things through the presented lmDNS protocol.
2. How to browse devices in general.
3. How browsing works internally for special resources integrated through digrectories for non-IP-ready services and resources, such as the EPCIS for RFID.

A more detailed description of the implementation of the different phases will be provided in the Deliverable D3.3 from WP3.



9.2 Registration
The registration procedure for the devices located in different clusters, together with DNS server discovery, is shown in the sequence diagram in Figure 17.

Figure 17: Registration of devices and DNS domains

9.3 Resource and service discovery
Figure 17 presents the initial phase of the registration process, which is triggered by a request from the DNS-SD (digrectory), with the main goal of reducing the power consumption of the smart things. In addition to the registration process, the smart thing's entry in the digrectory will be refreshed periodically, in order to ensure the freshness and integrity of the digrectory information. The rest of the protocol is based on a light version of DNS, which has been denominated lmDNS and is presented in the following sections. Resource and service discovery procedures are shown in the sequence diagram in Figure 18.

Figure 18: Resource and service discovery

Discovery is managed by the digcovery system. Digcovery can be accessed via a WebPortal (located at www.digcovery.net), the DNS protocol, the CoAP RD protocol, and the discovery API, as presented in Figure 16. The sequence begins with the digcovery receiving a global DNS query (queries are expressed with regular expressions). Then, digcovery looks up, across all the domains, the objects matching the queried attributes or features, and offers the list of domains where the matching services/resources are located. The client then queries each of the digrectories (local resource management systems) directly through DNS, and each digrectory offers extended information about the requested service, i.e. TXT records with the extended information, the SRV record with the service description, and the AAAA record with the IPv6 address.
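As a concrete illustration of this resolution step, the following minimal Java sketch uses the dnsjava library (an assumption made for this example; any DNS client could be used) to browse a service type via PTR records and then resolve the chosen instance to its SRV, TXT and AAAA records. The domain name is hypothetical.

```java
import org.xbill.DNS.Lookup;
import org.xbill.DNS.Record;
import org.xbill.DNS.SRVRecord;
import org.xbill.DNS.Type;

public class DigcoveryResolve {
    public static void main(String[] args) throws Exception {
        // Browse all CoAP services of a (hypothetical) digrectory domain via PTR records.
        Record[] instances = new Lookup("_coap._udp.domain.example", Type.PTR).run();
        if (instances == null || instances.length == 0) return;

        // Resolve the first instance: SRV (host + port), TXT (attributes), AAAA (IPv6 address).
        String instance = instances[0].rdataToString();
        Record[] srv = new Lookup(instance, Type.SRV).run();
        Record[] txt = new Lookup(instance, Type.TXT).run();

        if (srv != null && srv.length > 0) {
            SRVRecord service = (SRVRecord) srv[0];
            Record[] aaaa = new Lookup(service.getTarget().toString(), Type.AAAA).run();
            System.out.println("Host: " + service.getTarget() + " port: " + service.getPort());
            System.out.println("IPv6: " + (aaaa != null && aaaa.length > 0
                    ? aaaa[0].rdataToString() : "unknown"));
        }
        if (txt != null && txt.length > 0) {
            System.out.println("TXT:  " + txt[0].rdataToString());
        }
    }
}
```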

9.4 Extending discovery to non-IP clusters
In addition to this example query, more complex queries can be considered for the integration of legacy technologies, as presented in Deliverable D1.2, for other clusters of non-IP things. An example of such an extended query is the integration of RFID clusters. Specifically, the RFID integration into the IoT6 project is carried out through the EPCIS system. For that reason, the DNS system must be integrated with the EPCIS query protocols (see Deliverable D6.1 for information on the EPCIS API).



Figure 19: EPCIS Query through digcovery system

The discovery mechanism for non-IP devices presented in Figure 19 is similar to the one presented in Figure 18; the main difference is that the digrectory (local resource directory) connected to the EPCIS domain requires the adaptation of the EPCIS API to DNS. For that reason, an adaptation of the queries and results of the EPCIS to DNS is required. Once the tag has been discovered, there are two feasible ways to collect the other attributes, features and extended information from the tags: the first uses the EPCIS-based query API (represented in blue), and the second continues using the DNS-based query, which requires the digrectory to be involved in order to carry out the adaptations. When DNS is used for discovering EPC tags, a mapping between EPC and IPv6, such as the one presented in Deliverable D2.1 through the IPv6 addressing proxy mechanism, can also be considered.



10 Privacy and access management

The openness and ubiquity of the Internet present several horizontal challenges in offering suitable support for security, privacy and trust. In particular, access to resources and services needs to be managed in order to satisfy the security and privacy requirements of the use cases. Since the Internet of Things will embrace any kind of device, such as sensors and actuators, unauthorized access to some resources and services could lead to critical situations, e.g. for the sensors responsible for the safe operation of power plants, patient monitors at hospitals, energy management sensors in smart grids, and traffic sensors in Smart Cities and transportation systems. Controlled access to these types of resources is critical because of the very real potential for loss of life and for the massive environmental and infrastructure damage that malicious operations could cause.

Several mechanisms and technologies can be found in the State of the Art to manage access control in order to mitigate the risks of unauthorized access to data, resources, systems and services. Access control mechanisms and models use different technologies and underlying infrastructure components depending on the degree of complexity, i.e. the granularity at which access control is to be carried out. Nowadays, very sophisticated models can be found which expand and enhance earlier models. This level of sophistication depends on how the access control decision is made, i.e. what is considered in order to decide whether access is granted or not. These models vary depending on the requirements of the communications and information systems infrastructure, organizational structures, technologies, technical capabilities, and the level of the relationships, i.e. local, federal, national or global.

The deliverable D4.2 from the IoT-A project [72] surveys the different mechanisms, concepts and solutions for privacy and security in the resolution infrastructure. Specifically, it presents the common technologies for Authentication, Authorization, and Accounting (AAA) servers, such as SAML for authentication and XACML for authorization. In addition, it presents the components for the management of credentials in high-level and sophisticated architectures such as Identity Management architectures and Trust & Reputation architectures. These architectures and protocols are too heavy for constrained devices; therefore, lightweight mechanisms to address Identity Management in the Internet of Things architecture, for its proper integration into the Future Internet, need to be researched. This was concluded in the session on Internet of Things and Future Internet architecture during the Future Internet Assembly held in Aalborg [73]. In addition, lightweight versions of SAML and XACML over EXI need to be defined in order to make them suitable for constrained devices, since the lightweight mappings will lead to major challenges associated with scalability, manageability, addressing/identity, and robustness.

Some Resource Directories, such as the CoAP RD, define that access control should be performed separately for the RD management and look-up functions, i.e. the end-points or attributes that may be authorized to register with an RD may differ from those authorized to look up end-points from the RD. Such access control should be performed at as fine-grained a level as possible. For example, access control for look-ups could be performed at the domain, end-point or resource level.


This section is focused on addressing the privacy and access control requirements of the Open Service Layer, the Service Discovery mechanism and the Resource Directory in a safe and suitable way. For that purpose, the focus is on access control models based on approaches such as access control lists, which are commonly used in Wireless Sensor Networks. Higher-level access control models such as role-based access control, attribute-based access control, and policy-based access control are also discussed.

10.1 Access Control Lists
Access control to the resources and services can be enforced at different levels. The first level is at the sensor level, where a node does not connect with any device that does not share a key with it (credential-based access control). The second level is when a node refuses connections from addresses that are not in its Access Control List; this kind of access control is offered natively by protocols such as IEEE 802.15.4.

For this project, the focus is on access control at the discovery and Open Service architecture level, where, in order to maintain privacy, certain services will not be published. Specifically, the mechanism to carry out access control in the discovery architecture is based on the definition of the access control list during the commissioning of the devices. This specifies the discoverable services and the security mechanisms implemented for the respective services, and allows the devices to announce only determined services to their digrectories. Thereby, a distributed approach is reached where policy decisions are hosted in the digrectory.

In order to make it more scalable, general policies can be defined for all the sensors of a specific family and for all services of the same type. This can be useful for deployments where a semantic layer is defined, as presented in Section 6. For example, a common service for status monitoring, denominated "status", which checks the status of a sensor (i.e. whether it is online and what its last value is), can be defined. This service is useful as a callback when the clients have not yet bound the service and need to verify that the service is reachable. Since the status service is defined in the same way in all the sensors and is accessible when a sensor is discovered, other services can be defined which need information indicating whether the client is accessible or not.

10.2 Role-based Access Control mechanism
Role-based Access Control (RBAC) is a higher-level paradigm for access control. In RBAC, access to a resource is determined by the relationship between the client and the organization or owner in control of the resources and services. Role-based Access Control addresses some of the scalability problems of the Access Control Lists model, and presents new and interesting opportunities without introducing excessive overhead. One limitation of the previously presented model is that it treats every user as a distinct entity with distinct sets of permissions for each resource, i.e. it is resource-focused. In addition, the largest single pitfall of the Access Control Lists model is its limited scalability, since for each separate resource (or group of resources) the access control needs to be specified by the resource's owner. This leads to a centralized approach and complicates matters, since much coordination and planning has to be done to ensure that the correct people have the correct access to the correct resources.

Since Role-Based Access Control determines access based on roles, and since more than one client can have the same role (the role of software engineer, for example), Role-Based Access Control allows the grouping of individuals into categories of people who fulfil a particular role. This means that one set of access control permissions on a particular resource is valid for all the clients with that role. Clients can also be members of multiple groups (i.e. present different roles), creating hierarchies of permissions and inheritance, wherein more restrictive permissions override more general permissions. This approach is commonly used in operating systems.

10.3 Attribute Based Access Control
Attribute Based Access Control (ABAC) is a more sophisticated access control mechanism, based on other attributes in addition to the role. In this model, the access control decisions are made based on a set of characteristics, or attributes, associated with the requester, the environment, and/or the resource itself. Each attribute is a discrete, distinct field that a policy decision point can compare against a set of values to determine whether to allow or deny access. The attributes do not necessarily need to be related to each other, and in fact the attributes that go into making a decision can come from disparate, unrelated sources. This approach is similar to the XACML-based approach, XACML being the language commonly used to define the attributes. However, as mentioned previously, this approach is too heavy for constrained devices. Therefore, Attribute Based Access Control could be defined over simpler data structures such as JSON, which is being adopted in Internet of Things protocols such as CoAP.

10.4 Other approaches
Finally, more sophisticated approaches such as Policy-based Access Control can be considered. This model is more focused on domains such as eGovernment and Identity Management architectures, where the organizations involved have some kind of policy and governance structure in place to ensure the successful execution of the organization's mission, to mitigate risk, and to ensure accountability and compliance with relevant laws and regulations. It is very common in public organizations, government-related bodies, banks, hospitals and critical infrastructures in general. Other approaches are defined for the presented Distributed Hash Tables, where protocols such as the REsource LOcation And Discovery (RELOAD) Base Protocol define a security model based on a certificate enrollment service that provides unique identities [74]; however, this is also out of the scope of the Internet of Things due to the complexity of managing credentials and digital signatures.

10.5 Digcovery approach summary
The privacy and access management for the digcovery architecture is based on the publication policies of the digrectories. A Smart Object Control List (SOCL) is defined for each device in the digrectory. The SOCL defines exactly the services to be published; in other words, the digcovery acts like a "firewall" between the device network and the outside world during the discovery phase.

The SOCL can be created manually through the digcovery management interface based on RLUS, but in order to make it more scalable, the option is offered to initialize it automatically. The automatic way to register services and build the SOCL will be through the use of specific meta-data attributes carried over JSON, or over DNS TXT records in case JSON is not supported, indicating which roles can access each service and whether it is public or not. The SOCL can only be modified through authenticated messages from the Smart Object or the digrectory administrator.

General policies for families of sensors, such as humidity and temperature, or actuators, such as "open windows", will be defined. Thereby, the policies for groups of sensors, instead of individual sensors, can be managed. Common services for status monitoring and digcovery presentation, such as the "status" service which checks the sensor status, will be defined. This service is mainly a callback mechanism to check that the sensor is alive and that it is offering the kind of data required, before carrying out the binding or observation processes. Since the current status will be accessible when a sensor is discovered, other well-known services which depend on whether or not the client is accessible can be defined. In addition to the role, additional attributes can be considered through the JSON or TXT record mechanisms. Finally, a requirement will be to build the well-known services, attributes and roles over a single semantic description, as presented in Section 6. Specifically, SenML/JSON will be used for this purpose, since it is compatible with the IPSO interfaces of the CoAP protocol.
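The following Java sketch shows one possible in-memory representation of a SOCL and the publication/access filter applied by a digrectory. Field and method names are assumptions made for this example; the actual SOCL encoding (JSON meta-data or DNS TXT attributes) and its management are defined by the project.

```java
import java.util.Map;
import java.util.Set;

public class SoclFilter {

    /** SOCL entry: whether a service may be published and which roles may access it. */
    record SoclEntry(boolean publishable, Set<String> allowedRoles) {}

    /** SOCL for one Smart Object: service name -> policy, e.g. built at commissioning time. */
    private final Map<String, SoclEntry> socl;

    public SoclFilter(Map<String, SoclEntry> socl) {
        this.socl = socl;
    }

    /** Only publishable services are announced to the digcovery core. */
    public boolean isDiscoverable(String service) {
        SoclEntry entry = socl.get(service);
        return entry != null && entry.publishable();
    }

    /** Access decision for an already discovered service, based on the client role. */
    public boolean mayAccess(String service, String clientRole) {
        SoclEntry entry = socl.get(service);
        return entry != null && entry.allowedRoles().contains(clientRole);
    }

    public static void main(String[] args) {
        SoclFilter filter = new SoclFilter(Map.of(
                "status", new SoclEntry(true, Set.of("public")),
                "setpoint", new SoclEntry(false, Set.of("admin"))));
        System.out.println(filter.isDiscoverable("status"));      // true: announced
        System.out.println(filter.mayAccess("setpoint", "user")); // false: denied
    }
}
```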



11 Conclusions

The objective of this Deliverable was to describe the APIs and functions offered by the defined service layer in order to provide look-up and discovery, context-awareness, resource repository and access control solutions, including privacy management. Specifically, this document has described an initial approach for the Open Service Layer proposed from the IoT6 point of view, which offers a global discovery and look-up platform based on IPv6 technologies such as DNS. This Deliverable has also described the APIs and functions offered by the aforementioned Open Service Layer, and has presented the interactions between local and global service discovery mechanisms that fit IoT scenarios integrating both local and global scopes, in order to ease and homogenize the discovery task.


References

[1] L. Atzori, A. Iera, G. Morabito, "The Internet of Things: A survey". Computer Networks Vol. 54, No. 15, pp. 2787-2805, 2010. [2] J. Hui, and P. Thubert. “Compression Format for IPv6 Datagrams over IEEE 802.15.4- Based Network”. IETF 6LoWPAN Working Group, RFC6282, 2011. [3] A, J. Jara, M. A. Zamora, and A. Skarmeta, “GLoWBAL IP: an adaptive and transparent IPv6 integration in the Internet of Things”, Mobile Information Systems, “in press”, 2012. [4] Kerry Lynn, Jerry Martocci, Carl Neilson, Stuart Donaldson. “IPv6 over MS/TP Networks”, draft-ietf-6man-6lobac-01,6man working group, 2012. [5] Charles Frankston, BACNET discovery based on DNS, BACnet IT strawman proposal, 2009. [6] Z. Shelby, "Embedded web services," Wireless Communications, IEEE, Vol.17, No.6, pp. 52-57, doi: 10.1109/MWC.2010.5675778, December 2010 [7] W.K. Edwards, "Discovery systems in ubiquitous computing", Pervasive Computing, IEEE, Vol.5, No. 2, pp. 70- 77, doi: 10.1109/MPRV.2006.28, 2006. [8] S. Kiyomoto, and K. M. Martin. “Model for a Common Notion of Privacy Leakage on Public Database”. Journal of Wireless Mobile Networks, Ubiquitous Computing, and Dependable Applications (JoWUA), Vol. 2, No. 1, pp. 50-62, 2011. [9] S. Cheshire, M. Krochmal, “Multicast DNS”, http://tools.ietf.org/html/draft-cheshire- dnsext-multicastdns-15, December, 2011. [10] S. Cheshire, M. Krochmal, “DNS-Based Service Discovery”, http://tools.ietf.org/html/draft-cheshire-dnsext-dns-sd-10, February, 2011. [11] Z. Shelby, "CoRE Link Format", draft-ietf-core-link-format-06, IETF work in progress, June 2011. [12] Lynn, K. and Z. Shelby, "CoRE Link-Format to DNS-Based Service Discovery Mapping", draft-lynn-core-discovery-mapping-01 (work in progress), July 2011. [13] “Package avahi4j documentation” http://avahi4j.googlecode.com/svn- history/r19/avahi4j/www/api/avahi4j/package-summary.html [14] P. van der Stok, “CoRE Discovery, Naming, and Addressing”, draft-vanderstok-core-dna- 02, IETF CORE WG, work in progress, http://tools.ietf.org/html/draft-vanderstok-core-dna- 02, 2012. [15] E. Meshkova, J. Riihijärvi, M. Petrova, P. Mähönen, “A survey on resource discovery mechanisms, peer-to-peer and service discovery frameworks”, Computer Networks, Vol. 52, pp 2097-2128, 2008. [16] P. Mockapetris, “Domain Names - Implementation and Specification”, http://tools.ietf.org/html/rfc1035, 1987. [17] P. Vixie, “Extension Mechanisms for DNS (EDNS0)”, http://tools.ietf.org/html/rfc2671, 1999. [18] S. Sun, L. Lannom, B. Boesch, “Handle System Overview”, http://tools.ietf.org/html/rfc3650, 2003. [19] R. Kahn, R. Wilensky, “A framework for distributed digital object services”, International Journal on Digital Libraries, Vol. 6, N. 2, pp. 115-123, 2006.


[20] L. Coetzee, L. Butgereit, A. Smith, “Handle System Integration as an Enabler in an Internet of Things Smart Environment”, CSIR Meraka Institute, South Africa, pp. 1—11, May 2012. [21] L. Butgereit, L. Coetzee, “Beachcomber: Linking the 'Internet of Things' to the 'Internet of People.'”, in IST-Africa Conference Proceedings, Gaberone, Botswana, P. Cunningham & M. Cunningham (Eds.), 2011. [22] L. Coetzee, “ThingMemory”, http://ioteg.meraka.csir.co.za/ioteg/contentView.seam?contentId=64, 2011. [23] H. Balakrishnan, M. Kaashoek, D. Karger, R. Morris, I. Stoica, “Looking up data in P2P systems”, Communications of the ACM 46 (2) (2003) 43–48. [24] K. Aberer, P. Cudré-Mauroux, A. Datta, Z. Despotovic, M. Hauswirth, M. Punceva, R. Schmidt, “P-Grid: a self-organizing structured P2P system”, SIGMOD Rec., vol. 32, no. 3, September 2003, pp. 29–33. [25] D. Elenius, M. Ingmarsson, “Ontology-based service discovery in p2p networks”, in Proceedings of the MobiQuitous' 04 Workshop on Peer-to-Peer Knowledge Management (P2PKM 2004), 2004. [26] I. Stoica, R. Morris, D. Karger, M. Kaashoek, H. Balakrishnan, “Chord: A scalable peer-to-peer lookup service for internet applications”, in: Proceedings of the 2001 SIGCOMM Conference, San Diego, CA, USA, August 2001, pp. 149–160. [27] D. Karger, E. Lehman, T. Leighton, R. Panigrahy, M. Levine, D. Lewin, “Consistent hashing and random trees: distributed caching protocols for relieving hot spots on the World Wide Web”, in: Proceedings of the Twenty-Ninth Annual ACM Symposium on Theory of Computing, El Paso, Texas, United States, May 1997, pp. 654–663. [28] F. Dabek, J. Li, E. Sit, J. Robertson, M. Kaashoek, R. Morris, “Designing a DHT for low latency and high throughput”, in: Proceedings of 1st Symposium on Networked Systems Design and Implementation (NSDI), San Francisco, California, USA, March 2004, pp. 85–98. [29] P. Flocchini, A. Nayak, M. Xie, “Enhancing peer-to-peer systems through redundancy”, IEEE Journal on Selected Areas in Communications 25 (1) (2007) 15–24. [30] R. Cox, A. Muthitacharoen, R. Morris, “Serving DNS using a Peer-to-Peer Lookup Service”, in: Proceedings of the 1st International Workshop on Peer-to-Peer Systems (IPTPS’02), Cambridge, MA, USA, March 2002, pp. 155–165. [31] F. Dabek, M. F. Kaashoek, D. Karger, R. Morris, I. Stoica, “Wide-area cooperative storage with cfs”, in: Proceedings of the Eighteenth ACM Symposium on Operating Systems Principles, Banff, Alberta, Canada, October 2001, pp. 202–215. [32] B. Zhao, L. Huang, J. Stribling, S. Rhea, A. Joseph, J. Kubiatowicz, “Tapestry: a resilient global-scale overlay for service deployment”, IEEE Journal on Selected Areas in Communications 22, pp. 41–53, 2004. [33] C. Plaxton, R. Rajaraman, A.W. Richa, “Accessing nearby copies of replicated objects in a distributed environment”, in: Proceedings of the Ninth Annual ACM Symposium on Parallel Algorithms and Architectures, Newport, Rhode Island, USA, June 1997, pp. 311–320. [34] K. Aberer, P. Cudre-Mauroux, M. Hauswirth, T. Van Pelt, “GridVine: Building Internet-scale semantic overlay networks”, in International Semantic Web Conference, Hiroshima, Japan, November 2004, pp. 36–44. [35] A. Rowstron, P. Druschel, “Pastry: scalable, decentralized object location and routing for large-scale peer-to-peer systems”, in: Proceedings of IFIP/ACM International Conference on


Distributed Systems Platforms (Middleware), Heidelberg, Germany, November 2001, pp. 329–350. [36] K. Gummadi, R. Gummadi, S. Gribble, S. Ratnasamy, S. Shenker, I. Stoica, “The impact of DHT routing geometry on resilience and proximity”, in: Proceedings of SIGCOMM 2003, Karlsruhe, Germany, August 2003, pp. 381–394. [37] P. Maymounkov and D. Mazieres, Kademlia: “A peer-to-peer information system based on the XOR metric”, in: Proceedings of the 1st International Workshop on Peer-to-Peer Systems (IPTPS’02) 258, Cambridge, MA, USA, March 2002, p. 263. [38] S. Saroiu, P.K. Gummadi, S.D. Gribble, “A measurement study of peer-to-peer file sharing systems”, in: Proceedings of Multimedia Computing and Networking 2002 (MMCN’02), San Jose, CA, USA, January 2002. [39] S. Ratnasamy, P. Francis, M. Handley, R. Karp, S. Schenker, “A scalable content-addressable network”, in: Proceedings of the ACM SIGCOMM, San Diego, CA, USA, August 2001, pp. 161–172. [40] A. Adya, W.J. Bolosky, M. Castro, G. Cermak, R. Chaiken, J.R. Douceur, J. Howell, J.R. Lorch, M. Theimer, R.P. Wattenhofer, “Farsite: federated, available, and reliable storage for an incompletely trusted environment”, SIGOPS Oper. Syst. Rev. 36 (SI) (2002) 1–14. [41] J. Kubiatowicz, D. Bindel, Y. Chen, S. Czerwinski, P. Eaton, D. Geels, R. Gummadi, S. Rhea, H. Weatherspoon, C. Wells, B. Zhao, “Oceanstore: an architecture for global-scale persistent storage”, in: Proceedings of the Ninth International Conference on Architectural Support for Programming Languages and Operating Systems, Cambridge, Massachusetts, United States, November 2000, pp. 190–201. [42] G. Klyne, J. J. Carroll, “Resource Description Framework (RDF): Concepts and Abstract Syntax”, 2004. http://www.w3.org/TR/rdf-concepts/. [43] E. Prud’hommeaux, A. Seaborne, “SPARQL Query Language for RDF”, http://www.w3.org/TR/rdf-sparql-query/, 2008. [44] D. L. McGuinness, F. van Harmelen, “OWL Web Ontology Language Overview”, http://www.w3.org/TR/owl-features/, 2004. [45] F. Baader, D. Calvanese, D. McGuinness, D. Nardi, P. Patel-Schneider (Eds.), “The Description Logic Handbook: Theory, Implementation and Applications”, Cambridge University Press, 2003. [46] E. Sirin, B. Parsia, B. Grau, A. Kalyanpur, Y. Katz, “Pellet: a practical OWL-DL reasoner, Web Semantics: Science, Services and Agents on the World Wide Web 5, 2007, pp. 51-53. [47] M. Cai, M. Frank, B. Yan, R. MacGregor, “A subscribable peer-to-peer RDF repository for distributed metadata management”, in Journal of Web Semantics, Vol. 2, No. 2, pp. 109—130, 2004. [48] M. Cai, M. Frank, J. Chen, P. Szekely, “MAAN: a multi-attribute addressable network for grid information services”, in Proceedings of the 4th International Workshop on Grid Computing, 2003. [49] A. Seaborne, “RDQL - A Query Language for RDF”, http://www.w3.org/Submission/RDQL/, 2004. [50] K. Arabshian, H. Schulzrinne, “An ontology-based hierarchical peer-to-peer global service discovery system”, Journal of Ubiquitous Computing and Intelligence, Vol. 1, No. 2,


pp. 133—144, 2007. [51] L. Gong, “JXTA: A network programming environment”, in IEEE Internet Computing, Vol. 5, No. 3, pp. 88—95, 2001. [52] Y. Lafon et al., “Simple Object Access Protocol”, http://www.w3.org/TR/soap, 2007. [53] B. Emerson, “M2M: the internet of 50 billion devices”, Win-Win, Editorial: Huawei, January 2010. [54] Internet-of-Things Architecture, IoT-A, “Project Deliverable D1.2 – Initial Architecture Reference Model for IoT”, Joachim W. Walewski (Ed.), 2011. [55] Krco, S. and Z. Shelby, "CoRE Resource Directory", draft-shelby-core-resource-directory-02 (work in progress), October 2011. [56] FI-WARE Internet of Things (IoT) Services Enablement, retrieved at http://forge.fi-ware.eu/plugins/mediawiki/wiki/fiware/index.php (accessed June 7, 2012.) [57] Matthias Kovatsch, “Demo Abstract: Human-CoAP Interaction with Copper”, Proceedings of the 7th IEEE International Conference on Distributed Computing in Sensor Systems (DCOSS 2011). Barcelona, Spain, June 2011. [58] Vial, M., "CoRE Mirror Proxy", draft-vial-core-mirror-proxy-00 (work in progress), March 2012. [59] P. van der Stok, K. Lynn, A. Brandt. “CoRE Discovery, Naming, and Addressing”, draft-vanderstok-core-dna-01, (work in progress), March 2012. [60] Shelby, Z. and M. Vial, "CoRE Interfaces", draft-shelby-core-interfaces-02 (work in progress), March 2012. [61] Gómez-Goiri, A., Emaldi, M., López-de Ipiña, D. “A semantic resource oriented middleware for pervasive environments,” UPGRADE journal, vol. 2011, Issue No. 1, pp. 5–16, feb 2011. [62] D. Guinard, V. Trifa, S. Karnouskos, P. Spiess, and D. Savio, "Interacting with the SOA-Based Internet of Things: Discovery, Query, Selection, and On-Demand Provisioning of Web Services", Services Computing, IEEE Transactions on, Vol.3, No.3, pp.223-235, doi: 10.1109/TSC.2010.3, 2010. [63] Dennis Pfisterer, Kay Römer, Daniel Bimschas, Oliver Kleine, Richard Mietz, Cuong Truong, Henning Hasemann, Alexander Kröller, Max Pagel, Manfred Hauswirth, Marcel Karnstedt, Myriam Leggieri, Alexandre Passant, and Ray Richardson. “SPITFIRE: Toward a Semantic Web of Things”, IEEE Communications Magazine, pp. 40-48, November, 2011. [64] A. Daouadji, K.-K. Nguyen, M. Lemay, M. Cheriet, “Ontology-based Resource Description and Discovery Framework For Low Carbon Grid Networks”, in Proceedings of the First IEEE International Conference on Smart Grid Communications (SmartGridComm), pp. 477—482, 2010. [65] Kyoung-Min Kim, Jin-Hyuk Hong, Sung-Bae Cho, “Intelligent Web interface using flexible conversational agent with semantic Bayesian networks”, in Proceedings of the International Conference on Next Generation Web Services Practices, 22-26 Aug. 2005. [66] M. Rossi et al., “D2.3 Preliminary Context Model, Interfaces and Processing Mechanisms for Sensor Information Services”, Public SENSEI Deliverable, 2009 [67] Semantic Sensor Network Incubator Group, http://www.w3.org/2005/Incubator/ssn/wiki/Main_Page, 2005. [68] Giuseppe Pirro, Domenico Talia, Paolo Trunfio. “A DHT-based semantic overlay


network for service discovery”, http://grid.deis.unical.it/papers/pdf/PirroTaliaTrunfioFGCS2012.pdf, 2012. [69] P. Barnaghi, M. Bauer, S. Meissner, “Service modelling for the Internet of Things“, FedCSIS Conference Proceedings, Guildfort, UK, 2011. [70] W3C Working Group, Semantic Sensor Network Ontology (SSN), http://www.w3.org/2005/Incubator/ssn/wiki/Report_Work_on_the_SSN_ontology, 2005. [71] B. M. Elahi et al., “Sensor Ranking: A Primitive for Efficient Content-Based Sensor Search”, in Proc. 2009 Intl. Conf. Info. Processing in Sensor Networks, 2009, pp. 217–28. [72] Internet-of-Things Architecture, IoT-A, “Concepts and Solutions for Privacy and Security in the Resolution Infrastructure”, Nils Gruschka, Dennis Gessner (Ed.), 2012. [73] Antonio F. Skarmeta, Alessandro Bassi, Trevor Peirce, and Antonio J. Jara. Internet of Things (IoT) and Future Internet (FI) Architecture, http://www.future-internet.eu/home/future-internet-assembly/aalborg-may-2012/22-iot-and-fi-architectures.html, Future Internet Assembly, 2012. [74] C. Jennings, Cisco, Ed. B. Lowekamp, Skype, E. Rescorla, RTFM Inc., S. Baset, H. Schulzrinne, Columbia University, “Resource Location And Discovery (RELOAD) Base Protocol”, draft-ietf-p2psip-base-22. [75] Z. Shelby, “RFC6690: Constrained RESTful Environments (CoRE) Link Format”, IETF, CoRE Working Group, 2012.
