NEXOF-RA NESSI Open Framework – Reference Architecture IST- FP7-216446

Deliverable D7.1c State of the art report
Alberto Sillitti, Angelo Gaeta, Antonio De Nigro, Debora Desideri, Francisco Garijo, Francisco Pérez-Sorrosal, Jose M. Cantera, Katharina Mehner, Marcos Reyes, Mike Fisher, Nikolaos Tsouroulas, Pascal Bisson, Piero Corte, Raffaele Piccolo, Ricardo Jiménez, Sven Abels, Yosu Gorroñogoitia

Due date of deliverable: 30/11/2009 Actual submission date: 09/12/2009

This work is licensed under the Creative Commons Attribution 3.0 License. To view a copy of this license, visit http://creativecommons.org/licenses/by/3.0/ or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA. This work is partially funded by EU under the grant of IST-FP7-216446.


Change History
Version 1.0 | Date: 09/12/2009 | Status: Final | Author (Partner): All Partners | Description: This document is a restructured and extended survey of the previous version of the deliverable.


EXECUTIVE SUMMARY
One of the main objectives of NEXOF-RA is to produce the NEXOF Open Reference Architecture Specifications. The Architecture Specifications should integrate concepts from existing standards, models and architectures, and extend and refine them. The goal of this document is to provide a structured survey of the state of the art, taking a census of the most relevant service oriented architecture models/specifications, standards and initiatives and highlighting their strengths and weaknesses. This survey aims to be as exhaustive as possible with respect to the coverage of existing standards and initiatives, even though it does not aim to provide a deep analysis of each of them. A detailed evaluation of the pros and cons of some of the standards and initiatives investigated in this state of the art will be performed in a later phase, and only for those initiatives and standards that will be adopted by NEXOF-RA. This state of the art will be constantly updated to keep pace with the rapid changes of the market and to take account of upcoming standards and initiatives. To achieve this goal a worldwide open process will be set up and maintained through the NEXOF web site. The open process aims to gather recommendations coming from contributors, both internal contributors such as the NESSI Strategic Projects and external contributors not directly involved in the NESSI initiatives, in order to stay informed about new standards and initiatives of interest for NEXOF-RA.
This document is structured in three main parts. The first part focuses on the contribution from the Conceptual View analysis, performed within the project, of the architecture of the NEXOF Compliant Infrastructure, covering all the areas of functionality that are required to support the successful implementation of SOA-based applications. The standards investigated in the state of the art are then categorized with respect to each area of functionality; the categorization highlights which standard is of interest for each area. The second part concerns the general aspects of a SOA and analyses some existing reference architecture models and specifications. The third part contains the census of the standards and initiatives surveyed in this state of the art report. An appendix to this document lists all the standards and initiatives that NEXOF-RA is aware of, but which are not included in the census performed in the third part of this document.


Document Information

IST Project Number: FP7-216446 | Acronym: NEXOF-RA | Full title: NESSI Open Framework – Reference Architecture | Project URL: http://www.nexof-ra.eu | EU Project officer: Arian Zwegers

Deliverable: Number D7.1c | Title: State of the art report
Work package: Number 7 | Title: NEXOF Reference Architecture: Specifications

Date of delivery: Contractual 30/11/2009 | Actual 09/12/2009
Status: Version 1.01, dated 09/12/2009, final
Nature: Report
Abstract (for dissemination): In this state of the art report we explore the most important service oriented architecture models/specifications and standards, highlighting their strengths and weaknesses.
Keywords: SOA, SOA models, SOA standards, SOA specifications

Internal reviewers All partners

Authors (Partner): Alberto Sillitti (TIS), Angelo Gaeta (MOMA), Antonio De Nigro (ENG), Debora Desideri (ENG), Francisco Garijo (TID), Francisco Pérez (UPM), Jose M. Cantera (TID), Katharina Mehner (SIE), Marcos Reyes (TID), Mike Fisher (BT), Nikolaos Tsouroulas (TID), Pascal Bisson (THA), Piero Corte (ENG), Raffaele Piccolo (MOMA), Ricardo Jimenez (UPM), Sven Abels (TIE), Yosu Gorroñogoitia (ATOS)
Responsible Author: Piero Corte | Partner: Engineering | Email: [email protected] | Phone: 0649201416


TABLE OF CONTENTS

EXECUTIVE SUMMARY ...... 3 TABLE OF CONTENTS ...... 5 1 INTRODUCTION ...... 10 2 CONCEPTUAL VIEW OF THE NEXOF ARCHITECTURE ...... 11 2.1 Categorization of the standards, initiatives and products ...... 13 3 STATE OF ART OF SOA ...... 19 3.1 Introduction ...... 19 3.2 Web Service Architecture ...... 19 3.2.1 WSA model ...... 20 3.2.2 Functionalities...... 23 3.3 SCA ...... 25 3.4 OASIS Reference Model for SOA ...... 29 3.4.1 OASIS Conceptual Model ...... 30 3.5 OASIS Reference Architecture Foundation for SOA ...... 34 3.6 SeCSE Conceptual Model ...... 42 3.6.1 Agents and actors ...... 42 3.6.2 Core conceptual model ...... 43 3.6.3 Types of Services ...... 44 3.6.4 Service description ...... 45 3.6.5 Facets ...... 46 3.6.6 Service discovery ...... 47 3.6.7 Service composition ...... 48 3.6.8 SLA Negotiation...... 49 3.6.9 Service monitoring ...... 50 3.6.10 Service publication ...... 51 3.6.11 SeCSE methodology ...... 52 4 SURVEY OF RELEVANT STANDARDS, INITIATIVES AND PRODUCTS ...... 55 4.1 Introduction ...... 55 4.2 Standards and initiatives ...... 55 4.2.1 Alert Standard Format (ASF). DMTF ...... 55 4.2.2 Availability Management for Java. JCP ...... 56 4.2.3 Asynchronous Service Access Protocol (ASAP). OASIS ...... 57 4.2.4 BACnet. ASHRAE ...... 58 4.2.5 BPEL4People. OASIS ...... 58 NEXOF-RA • FP7-216446 • D7.1c • Version 1.0, dated 09/12/2009 • Page 5 of 191

4.2.6 Business Process Execution Language (BPEL4WS 1.1/2.0). OASIS ...... 59 4.2.7 Business Process Definition Metamodel (BPDM). OMG ...... 61 4.2.8 Business Process Management Language BPML 1.0 ...... 61 4.2.9 Business Process Modeling Language BPMN 1.1. OMG ...... 62 4.2.10 Business Transaction Protocol (BTP). OASIS ...... 63 4.2.11 CC/PP – UAProf. W3C ...... 65 4.2.12 Common Diagnostic Model (CDM). DMTF ...... 65 4.2.13 Common Information Model (CIM). DMTF ...... 66 4.2.14 Common Management Information Protocol (CMIP)/Common Management Information Service (CMIS). ITU ...... 68 4.2.15 Common raid Disk Data Format (DDF). SNIA ...... 70 4.2.16 Configuration Description, Deployment and Lifecycle Management Component Model Specification (1.0). CDDLM-WG ...... 72 4.2.17 Configuration Description, Deployment and Lifecycle Management Deployment API (1.0). CDDLM-WG ...... 73 4.2.18 Configuration Description, Deployment and Lifecycle Management SmartFrog- Based Language Specification (1.0) and CDDML Configuration Description Language Specification (1.0). CDDLM-WG ...... 73 4.2.19 Content Selection for Device Independence (DISelect). W3C ...... 75 4.2.20 Delivery Context: Client Interfaces (DCCI). W3C ...... 75 4.2.21 Delivery Context Ontology. W3C ...... 76 4.2.22 Desktop and mobile Architecture for System Hardware (DASH). DMTF ...... 76 4.2.23 Device Description Repository (DDR) Simple API. W3C ...... 79 4.2.24 Device Independent Authoring Language (DIAL). W3C ...... 79 4.2.25 Digital Signature Service (DSS) Core Protocols. OASIS ...... 80 4.2.26 Distributed Resource Management Application API Specification (1.0). OGF/GGF ...... 80 4.2.27 ECperf Benchmark Specification. JCP ...... 81 4.2.28 Electronic Business Extensible Markup Language (ebXML). OASIS ...... 81 4.2.29 EJB Security ...... 83 4.2.30 Enterprise Class Syslog Management ...... 84 4.2.31 eXtensible Access Control Markup Language (XACML). OASIS ...... 84 4.2.32 eXtensible Access Method (XAM). SNIA ...... 84 4.2.33 FCAPS. ITU ...... 87 4.2.34 Formalism for visual security protocol modelling ...... 88 4.2.35 HTML 5. W3C ...... 88 4.2.36 iSCSI Management API (IMA). SNIA ...... 89 4.2.37 Information Technology Infrastructure Library (ITIL). OGC ...... 89 4.2.38 J2EE Activity Service for Extended Transactions. JCP ...... 91 4.2.39 J2EE APIs for Continuous Availability. JCP ...... 93


4.2.40 Java API for XML Transactions (JAXTX). JCP ...... 94 4.2.41 Java Authentication and Authorization Service (JAAS). SUN ...... 95 4.2.42 Java Data Object Secure Specification ...... 95 4.2.43 Java Management Extensions (JMX). JCP ...... 95 4.2.44 Java Secure Socket Extension (JSSE). SUN ...... 97 4.2.45 Java Security Manager. SUN...... 98 4.2.46 Java Transaction API (JTA). JCP ...... 98 4.2.47 MSR Specification Language ...... 99 4.2.48 Message Trasmission Optimization Mechanism (MTOM). W3C...... 100 4.2.49 Multipath Management API (MMA). SNIA ...... 100 4.2.50 NETCONF. IETF ...... 101 4.2.51 Network Management Architectural Model ...... 102 4.2.52 OMA-DPE. OMA ...... 103 4.2.53 OWL-based Web Service Ontology (OWL-S). W3C ...... 103 4.2.54 Platform for Privacy Preferences Project. W3C ...... 105 4.2.55 Process Definition for Java. JCP ...... 106 4.2.56 Programming language with privacy-preserving features ...... 106 4.2.57 RDFa. W3C ...... 106 4.2.58 RMI-SSL (Remote Method Invocation). SUN...... 108 4.2.59 RSS ...... 108 4.2.60 SCXML. W3C ...... 109 4.2.61 Security Assertion Mark-up Language (SAML). OASIS ...... 110 4.2.62 Semantic Annotation for WSDL (SAWSDL). W3C ...... 110 4.2.63 Service Availability Forum Specifications. SA Forum ...... 112 4.2.64 Simple Network Management Protocol (SNMP) v3. IETF ...... 118 4.2.65 SOAP 1.1/1.2. W3C ...... 121 4.2.66 SOAP Message Security. OASIS ...... 121 4.2.67 SPECjAppServer2004. SPEC ...... 122 4.2.68 Storage Management Initiative (SMI-S). SNIA...... 122 4.2.69 SVG. W3C ...... 123 4.2.70 Systems Management Architecture for Server Hardware (SMASH). DMTF ... 124 4.2.71 System Management BIOS (SMBIOS). DMTF ...... 125 4.2.72 TPC-App. TPC ...... 126 4.2.73 TPC-C. TPC...... 127 4.2.74 TPC-E. TPC ...... 128 4.2.75 UDDI. OASIS ...... 129 4.2.76 Web-Based Enterprise Management (WBEM). DMTF ...... 131 4.2.77 Web Services Choreography Description Language (WS-CDL). W3C ...... 132 4.2.78 Web Service Choreography Interface WS-CI. W3C ...... 133


4.2.79 Web Service Description Language (WSDL). W3C ...... 133 4.2.80 Web Services Distributed Management (WSDM). OASIS ...... 136 4.2.81 Web Service Interoperability (WS-I) Basic Profile. WS-I ...... 140 4.2.82 Web Services Level Agreement (WS-LA). IBM...... 140 4.2.83 Web Services Modeling Ontology (WSMO) - Web Services Modeling Language (WSML). ESSI WSMO Working Group ...... 141 4.2.84 Web Service Semantics (WSDL-S). W3C ...... 143 4.2.85 Widgets specs. W3C ...... 144 4.2.86 Windows Management Instrumentation. Microsoft ...... 145 4.2.87 Workflow XML (Wf-XML). WfMC ...... 145 4.2.88 WS Addressing. W3C ...... 146 4.2.89 WS Agreement. OGF/GGF ...... 146 4.2.90 WS Composite Application Framework 1.0 (WS-CAF). OASIS ...... 147 4.2.91 WS Conversation Language WS-CL. W3C ...... 151 4.2.92 WS Enumeration. W3C ...... 151 4.2.93 WS Federation. OASIS ...... 152 4.2.94 WS Human Task (WS-HT). SAP ...... 152 4.2.95 WS Management. DMTF ...... 154 4.2.96 WS Metadata Exchange. BEA System / IBM / Microsoft / SAP ...... 154 4.2.97 WS Notification, WS Eventing, WS EventNotification. OASIS, W3C ...... 155 4.2.98 WS Policy. W3C ...... 156 4.2.99 WS Policy Framework and Closely Related Standards. W3C ...... 157 4.2.100 WS Reliable Messaging 1.1. OASIS ...... 158 4.2.101 WS Resource Framework. OASIS ...... 160 4.2.102 WS Resource Transfer (WS-RT). W3C ...... 160 4.2.103 WS SecureConversation Specification. OASIS...... 161 4.2.104 WS Security. OASIS ...... 162 4.2.105 WS Transactions (WS-TX). OASIS ...... 162 4.2.106 WS Transfer. W3C ...... 165 4.2.107 WS Trust. OASIS ...... 166 4.2.108 Xforms. W3C ...... 166 4.2.109 XHTML 1.1. W3C ...... 168 4.2.110 XHTML 2.0. W3C ...... 169 4.2.111 XML Events. W3C ...... 170 4.2.112 XML Process Definition Language (XPDL). WfMC ...... 171 4.3 Products ...... 171 4.3.1 Active BPEL ...... 171 4.3.2 Amazon Web Services ...... 172 4.3.3 Google Application Engine ...... 173


4.3.4 Google BigTable ...... 174 4.3.5 Google File System ...... 175 4.3.6 Google MapReduce ...... 177 4.3.7 IRS3 ...... 178 4.3.8 JBOSS jBPM ...... 179 4.3.9 NovaBPM: Nova Orchestra/Bonita ...... 180 4.3.10 Salesforce.com ...... 181 4.3.11 SeCSE Registry ...... 182 4.3.12 Triple Space...... 184 4.3.13 Windows Azure Platform ...... 185 4.3.14 WSMX ...... 187 APPENDIX A: OTHER STANDARDS AND INITIATIVES ...... 189


1 INTRODUCTION
One of the main objectives of NEXOF-RA is to produce the NEXOF Open Reference Architecture Specifications. The Architecture Specifications should integrate concepts from existing standards, models and architectures, and extend and refine them. In this state of the art report we explore the most important architecture models/specifications, standards and initiatives, highlighting their strengths and weaknesses.


2 CONCEPTUAL VIEW OF THE NEXOF ARCHITECTURE
The conceptual view of the architecture of the NEXOF Compliant Infrastructure is designed to cover the areas of functionality that are required to support the successful implementation of SOA-based applications. It is the result of a logical set of deductions based on the nature and objectives of SOA as they are commonly recognized within the most important initiatives that aim at providing a reference specification for it, such as OASIS and W3C. The identification of the areas of functionality is performed at a high level of abstraction from the point of view of a software architect, designer and developer respectively. This section introduces all the aspects that a general-purpose SOA Infrastructure has to provide in order to help and accelerate the development of services and service-based applications in different business domains. Starting from that, the conceptual architecture view of the NEXOF Compliant Infrastructure is organized according to the following separation of concerns:
− Services, to support the creation and execution of services
− Messaging, to enable service interaction
− Discovery, to support the discovery of (provided and required) services
− Composition, to support the composition of services
− Analysis, to provide tools to analyse the services it supports with respect to the business goals
− Presentation, to support presentation interfaces that enable human users to interact with services and applications
− Management, to support the management and monitoring of its core components and of the services and applications it supports
− Security, to provide mechanisms and policies to accomplish different levels of security
− Resources, to provide the computational resources to execute the infrastructure


Separation of concerns

Each concern is described to better illustrate the functionality and the information that the platform correspondingly has to provide and manage to meet the specific requirements.
The Services concern is dedicated to capturing the functionalities needed to support the creation (design, implementation, testing and deployment) and execution of services. It addresses the methods for service creation, including those features provided by the platform that enable new services to be created from scratch as well as existing applications to be exposed as service implementations. It also covers functionalities to modify and create a new compatible version of existing services and, finally, to validate and test them.
The Messaging concern addresses the communication capability that allows applications or services to interact with other services. The term "messaging" reflects the fact that such a communication capability is based on message exchange. Connectivity needs to be "connection-independent" – that is, the service requestor and service provider should be loosely coupled rather than tightly bound.
The Discovery concern covers functionalities that are needed to support the discovery of services. It also addresses mechanisms and policies to make discovery effective, such as the publication of service descriptions into service registries and similar aspects. With respect to the context in which service discovery is done, a distinction can be made between discovery at design-time and discovery at run-time. Furthermore, another distinction can be made with respect to how a consumer becomes aware of the existence of services: a "pull policy", if consumers directly search a registry to find services, and a "push policy", if a consumer is notified, according to its subscribed interests, as soon as a new service is published.
The Composition concern addresses the creation and execution of software processes. It covers both the support at design-time (orchestration, choreography) and at run-time (static/dynamic composition). The result of the composition of services is called a process. It also addresses the ability to support dynamic process reconfigurations, delivering much improved business agility.
The Analysis concern is about the functionalities that support the analysis of information related to the execution of services and processes with respect to business requirements. Similarly to business intelligence for data warehouses, it provides business intelligence analysis related to process execution in order to facilitate the governance of SOA-based systems.
The Presentation concern addresses mechanisms to enable human users to interact with and make use of the functionality provided by the overall platform. It covers both the offering of graphical user interfaces for supporting human user interactions and APIs to enable the creation, customization and execution of such graphical user interfaces.
The Management concern is devoted to the management and monitoring of services and processes and, more generally, to the management and monitoring of the usage of platform functionalities.
The Security concern addresses the support needed to manage the security of the system. It covers aspects concerning user/client authentication, access control and authorization, link-level encryption or network segregation of messages, denial-of-service attacks, and tampering with information flows.
The Resources concern deals with the "Infrastructure" of the NEXOF Compliant Infrastructure. It includes the computational resources needed to support the execution of the software components that constitute the platform: virtualization of computing, storage and network resources. In this context, software components such as service containers, message brokers, service registries and process engines, which are commonly considered SOA Infrastructure components, are not considered part of the "Resources" subsystem (the Infrastructure of the SOA Infrastructure).

2.1 Categorization of the standards, initiatives and products
This section provides a categorization of the standards, initiatives and products surveyed in this state of the art with respect to the concerns described above. The goal of this categorization is to provide, in tabular form, a reference for each area of functionality to the standards, initiatives and products that are of interest for the specific topics addressed by each concern. Each row of the table also provides a reference to the NEXOF-RA project work package that proposed the survey of the specific standard, initiative or product.

Standards / Initiatives / Products | WP | Services | Messaging | Discovery | Composition | Analysis | Presentation | Management | Security | Resources

Standards and Initiatives Alert Standard Format (ASF) WP3  Availability Management for Java WP4 


Asynchronous Service Access Protocol WP2  (ASAP) BACnet. ASHRAE WP3  BPEL4People WP2  BPEL4WS 1.1/2.0 WP2  BPDM WP2  BPML 1.0 WP2   BPMN 1.1 WP2   Business Transaction Protocol (BTP) WP2  CC/PP – UAProf WP1  Common Diagnostic Model (CDM) WP3  Common Information Model (CIM) WP3  Common Management Information WP3  Protocol/Service (CMIP/CMIS) Common raid Disk Data Format (DDF) WP3  CDDLM Component Model Specification WP4  (1.0) CDDLM Deployment API (1.0) WP4  CDDLM SmartFrog-Based Language Specification (1.0) / Configuration WP4  Description Language Specification (1.0) DISelect WP1  DCCI WP1  Delivery Context Ontology WP1  Desktop and Mobile Architecture for WP3  System Hardware (DASH) Device Description Repository (DDR) WP1  Simple API Device Independent Authoring WP1  Language (DIAL) Digital Signature Service (DSS) Core WP4  Protocols Distributed Resource Management WP4   Application API Specification 1.0 ECperf Benchmark Specification WP4  ebXML WP2     EJB Security WP4  Enterprise Class Syslog Management WP3  eXtensible Access Control Markup WP4  Language (XACML)


eXtensible Access Method (XAM) WP3  FCAPS WP3  Formalism for visual security protocol WP4  modelling HTML 5 WP1  iSCSI Management API (IMA) WP3  Information Technology Management WP3  Library (ITIL) J2EE Activity Service for Extended WP2    Transactions J2EE APIs for Continuous Availability WP4  Java API for XML Transactions (JAXTX) WP2    ava Authentication and Authorization WP4  Service (JAAS) Java Data Object Secure Specification WP4  Java Management Extensions (JMX) WP4  Java Secure Socket Extension (JSSE) WP4  Java Security Manager WP4  Java Transaction API (JTA) WP2    MSR Specification Language WP4  Message Trasmission Optimization WP2  Mechanism (MTOM) Multipath Management API (MMA) WP3  NETCONF WP3  Network Management Architectural WP3  Model - Network Management System OMA-DPE WP1  OWL-based Web Service Ontology WP2     (OWL-S) Platform for Privacy Preferences Project WP4  Process definition for java WP2  Programming language with privacy- WP4  preserving features RDFa WP1  RMI-SSL WP4  RSS WP1  SCXML WP1  Security Assertion Markup Language WP4  (SAML) Semantic Annotation for WSDL WP2     (SAWSDL)


Service Availability Forum Specifications – Application Interface WP4  Specification Service Availability Forum Specifications – Hardware Platform WP4   Interface Specification Simple Network Management Protocol WP3  (SNMP) v3 SOAP 1.1/1.2 WP2  SOAP Messaging Security WP4  SPECjAppServer2004 WP4  Storage Management Initiative (SMI-S) WP3  SVG WP1  System Management Architecture for WP3  Service Hardware (SMASH) System Management BIOS (SMBIOS) WP3  TPC-App WP4  TPC-C WP4  TPC-E WP4  UDDI WP2   Web-Based Enterprise Management WP4  (WBEM) Web Services Choreography WP2  Description Language (WS-CDL) Web Service Choreography Interface WP2   (WS-CI) Web Service Description Language WP2   (WSDL) Web Services Distributed Management WP4  (WSDM) Web Service Interoperability (WS-I) WP2  Basic Profile Web Services Level Agreement (WS-LA) WP4  Web Services Modeling WP2    Ontology/Language (WSMO – WSML) Web Service Semantics (WSDL-S) WP2  Widgets specs WP1  Windows Management Instrumentation. WP3  Microsoft Workflow XML (Wf-XML) WP2   WS Addressing WP2  WS Agreement WP4 


WS Composite Applcation Framework WP2  1.0 (WS-CAF) WS Conversation Language (WS-CL) WP2   WS Enumeration WP2  WS Federation WP4  WS Human Task (WS-HT) WP2   WS Management WP4  WS Metadata Exchange WP2  WS Notification, WS Eventing, WS WP2 EventNotification WS Policy WP4  WS Policy Framework and Closely WP2  Related Standards WS Reliable Messaging 1.1 WP2   WS Resource Framework WP2   WS Resource Transfer WP2  WS SecureConversation Specification WP4  WS Security WP4  WS Transaction (WS-TX) WP2    WS Transfer WP2  WS Trust WP4  XForms WP1  XHTML 1.1 WP1  XHTML 2.0 WP1  XML Events WP1  XML Process Definition Language WP2  (XPDL) Products Active BPEL WP2  Amazon Web Services WP4     Google Application Engine WP4       Google BigTable WP4  Google File System WP4   Google MapReduce WP4  IRS3 WP2     JBoss jBPM WP2  Nova Orchestra/Bonita WP2  Salesforce.com WP4       SeCSE Registry WP2  


Triple Space WP2   Windows Azure platform WP4      WSMX WP2    


3 STATE OF ART OF SOA

3.1 Introduction
The term Service Oriented Architecture (SOA) refers to an approach for modelling software architectures. There is an increasingly widespread acceptance of SOA as a paradigm for integrating software applications within and across organizational boundaries. SOA is an architectural style based on the concept of a service, which can be seen as a set of offered functionalities. Service consumers view a service simply as an endpoint that supports a particular request format or contract. Service consumers are not concerned with how the service goes about executing their requests; they expect only that it will. Consumers also expect that their interaction with the service will follow a contract, an agreed-upon interaction between two parties. The way the service executes tasks given to it by service consumers is irrelevant. The most important aspects of SOA are the following:

− services are distributed and under the control of different owners;
− service implementation is separated from its interface. In other words, SOA separates the "what" from the "how".

The term SOA has been adopted by the main ICT actors and by the main standardization groups; indeed, enterprise architects believe that SOA can help businesses respond more quickly and cost-effectively to changing market conditions. The main drivers for SOA-based architectures are to facilitate the manageable growth of large-scale enterprise systems, to facilitate Internet-scale provisioning and use of services, and to reduce the costs of organization-to-organization cooperation. SOA can also provide a solid foundation for business agility and adaptability. In some respects, SOA can be considered an architectural evolution rather than a revolution, and it captures many of the best practices of previous software architectures. It has been influenced by ideas such as CORBA or ESBs. SOA promotes the goal of separating users (consumers) from the service implementations. Services can therefore be run on various distributed platforms and be accessed across networks. This can also maximize the reuse of services. However, there is no shared agreement about the definition and principles of SOA. The remainder of this chapter reviews the positions of relevant organizations, such as W3C, OASIS and IBM, to highlight the common principles and the differences among the initiatives proposed for SOA.

3.2 Web Service Architecture
The Web Service Architecture (WSA) [1] document is intended to provide a common definition of a Web service, and to define its place within a larger Web services framework to guide the community. The WSA provides a conceptual model and a context for understanding Web services and the relationships between the components of this model. The architecture does not attempt to specify how Web services are implemented, and imposes no restriction on how Web services might be combined. The WSA describes both the minimal characteristics that are common to all Web services and a number of characteristics that are needed by many, but not all, Web services. Moreover, it identifies those elements of the global Web services network that are required in order to ensure interoperability between Web services. The document describes both a reference model and a reference architecture. The architecture is organized in the following four models:

− Message Oriented Model
− Service Oriented Model
− Resource Oriented Model
− Policy Model

In addition, it describes:

− Common functionalities of a SOA
− A common scenario for using Web services

3.2.1 WSA model
Message Oriented Model
The Message Oriented Model focuses on those aspects of the architecture that relate to messages and the processing of them, without taking into consideration the semantic significance of the content of the message. It also focuses on message structure, message transport and so on.

Distributed applications in a Web service architecture communicate via message exchanges. A message is defined as the basic unit of data sent from one Web services agent to another in the context of Web services. The main parts of a message are its envelope, a set of zero or more headers, and the message body. Message Transport is a mechanism that may be used by agents to deliver messages.
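To make this structure concrete, the sketch below shows what such a message could look like in SOAP, one common concrete message format for Web services discussed later in this report. The header block shown and the payload element names are illustrative assumptions only and are not taken from the WSA document.

<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Header>
    <!-- zero or more header blocks, for example addressing information -->
    <wsa:MessageID xmlns:wsa="http://www.w3.org/2005/08/addressing">urn:uuid:7f1c2e1a-0b4d-4d2e-9b6e-2a1d3c4e5f60</wsa:MessageID>
  </soap:Header>
  <soap:Body>
    <!-- application payload; element names are hypothetical -->
    <ord:placeOrder xmlns:ord="http://example.org/orders">
      <ord:itemId>4711</ord:itemId>
      <ord:quantity>2</ord:quantity>
    </ord:placeOrder>
  </soap:Body>
</soap:Envelope>

In a layering of this kind, intermediaries along the message path may process or add header blocks, while the body carries the payload intended for the ultimate receiver.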


In conclusion, the essence of the message model revolves around a few key concepts illustrated above: the agent that sends and receives messages, the structure of the message in terms of message headers and bodies and the mechanisms used to deliver messages.

Service Oriented Model
The Service Oriented Model focuses on aspects of service, action and so on. A service is realized by an agent and used by another agent. Services are mediated by means of the messages exchanged between requester agents and provider agents.

A service is an abstract resource that represents a capability of performing tasks that constitute a coherent functionality from the point of view of provider entities and requester entities. To be used, a service must be realized by a concrete provider agent. A provider agent is an agent that is capable of and empowered to perform the actions associated with a service on behalf of its owner. An agent is a program acting on behalf of a person or organization. Agents are programs that engage in actions on behalf of someone or something else. Agents realize and request Web services. In effect, software agents are the running programs that drive Web services, both to implement them and to access them. The provider entity is the person or organization that is providing a Web service. A very important aspect of services is their relationship to the real world: services are mostly deployed to offer functionality in the real world. The WSA specification captures this by elaborating on the concept of a service's owner, which, whether it is a person or an organization, has a real-world responsibility for the service.
The Service Oriented Model makes use of meta-data. This meta-data is used to document many aspects of services: from the details of the interface and transport binding to the semantics of the service and what policy restrictions there may be on the service. Providing rich descriptions is key to successful deployment and use of services across the Internet. For a Web service to be compliant with the WSA architecture there must be sufficient service descriptions associated with the service to enable its use by other parties. A service description is a set of documents that describe the interface and semantics of a service. It contains the details of the interface and, potentially, the expected behaviour of the service. This includes its data types, operations, transport protocol information and address. It could also include categorization and other metadata to facilitate discovery and utilization. The complete description may be realized as a set of XML description documents.
A service interface is the abstract boundary that a service exposes. It defines the types of messages and the message exchange patterns that are involved in interacting with the service, together with any conditions implied by those messages. The semantics of a service is the behaviour expected when interacting with the service. It expresses a contract (not necessarily a legal contract) between the provider entity and the requester entity. It expresses the intended real-world effect of invoking the service. Service semantics may be formally described in a machine-readable form, identified but not formally defined, or informally defined via an "out of band" agreement between the provider entity and the requester entity.
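As a hedged illustration of such a description, the following minimal WSDL 1.1 sketch shows how data types, operations, a transport binding and a concrete address are typically documented. The service, operation and namespace names are hypothetical.

<definitions name="OrderService"
             targetNamespace="http://example.org/orders/wsdl"
             xmlns="http://schemas.xmlsoap.org/wsdl/"
             xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
             xmlns:tns="http://example.org/orders/wsdl"
             xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <!-- data types exchanged with the service -->
  <message name="PlaceOrderRequest">
    <part name="itemId" type="xsd:string"/>
    <part name="quantity" type="xsd:int"/>
  </message>
  <message name="PlaceOrderResponse">
    <part name="confirmation" type="xsd:string"/>
  </message>
  <!-- the abstract interface: operations and message exchange pattern -->
  <portType name="OrderPortType">
    <operation name="placeOrder">
      <input message="tns:PlaceOrderRequest"/>
      <output message="tns:PlaceOrderResponse"/>
    </operation>
  </portType>
  <!-- the transport binding -->
  <binding name="OrderSoapBinding" type="tns:OrderPortType">
    <soap:binding style="rpc" transport="http://schemas.xmlsoap.org/soap/http"/>
    <operation name="placeOrder">
      <soap:operation soapAction="http://example.org/orders/placeOrder"/>
      <input><soap:body use="literal" namespace="http://example.org/orders/wsdl"/></input>
      <output><soap:body use="literal" namespace="http://example.org/orders/wsdl"/></output>
    </operation>
  </binding>
  <!-- the concrete address -->
  <service name="OrderService">
    <port name="OrderPort" binding="tns:OrderSoapBinding">
      <soap:address location="http://example.org/orders"/>
    </port>
  </service>
</definitions>

Note how the abstract portType can be reused with different bindings and addresses, which is one way the separation of a service interface from its implementation and location is preserved in practice.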

Resource Oriented Model
The Resource Oriented Model focuses on resources that exist and have owners.

The resource model is adopted from the Web Architecture concept of resource. A resource is defined to be anything that can have a unique identifier (a URI). This architecture is only concerned with those resources that have a name, may have reasonable representations and which can be said to be owned. From a real-world perspective, a most interesting aspect of a resource is its ownership: a resource is something that can be owned, and therefore have policies applied to it. A representation is a piece of data that describes a resource state. A resource description is any machine readable data that may permit resources to be discovered.

Policy Model
Policies are about resources. They are applied to agents that may attempt to access those resources, and are put in place, or established, by people who have responsibility for the resource. A policy is a constraint on the behaviour of agents as they perform actions or access resources. There are many kinds of policies: some relate to accessing resources in particular ways, others relate more generally to the allowable actions an agent may perform.


The Policy Model focuses on constraints on the behaviour of agents and services. The specification generalizes this to resources since policies can apply equally to documents (such as descriptions of services) as well as active computational resources.
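As an illustration, constraints of this kind are commonly expressed in XML-based policy languages. The minimal WS-Policy style sketch below uses a hypothetical RequireEncryption assertion and namespace as placeholders; concrete assertion vocabularies, such as WS-SecurityPolicy, define the real constraints.

<wsp:Policy xmlns:wsp="http://www.w3.org/ns/ws-policy"
            xmlns:ex="http://example.org/policy-assertions">
  <wsp:ExactlyOne>
    <wsp:All>
      <!-- hypothetical assertion: messages exchanged with the resource must be encrypted -->
      <ex:RequireEncryption/>
    </wsp:All>
  </wsp:ExactlyOne>
</wsp:Policy>

Listing several alternatives inside ExactlyOne lets a resource owner advertise more than one acceptable configuration, from which an accessing agent picks one it supports.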

Policies may be enacted to represent security concerns, quality of service concerns, management concerns and application concerns.
3.2.2 Functionalities
Common functionalities of a SOA
Below is a high-level use case describing the functionalities of a SOA according to the WSA document.

− Requester: represents both the Requester Entity and the Requester Agent. A requester entity is a person or organization that wishes to make use of a provider entity's Web service. It will use a requester agent to exchange messages with the provider entity's provider agent.
− Provider: represents both the Provider Entity and the Provider Agent. The provider entity is the person or organization that provides an appropriate agent (Provider Agent) to implement a particular service.
− Use services: the requester agent and provider agent exchange SOAP messages on behalf of their owners.
− Enact visibility: the requester and provider entities "become known to each other", in the sense that whichever party initiates the interaction must become aware of the other party. There are two ways this may typically occur: the requester entity may obtain the provider agent's address directly from the provider entity, or the requester entity may use a discovery service to locate a suitable service description.
− Discover service descriptions: the discovery service somehow obtains both the Web service description and an associated functional description of the service. The WSA does not specify or care how the discovery service obtains the service description or functional description.
− Agree on service descriptions: the requester entity and provider entity agree on the service description (a WSDL document) and semantics that will govern the interaction between the requester agent and the provider agent.

Common scenario for using Web services
Below is a common scenario describing the four broad steps involved in the process of engaging (using) a Web service.

− The requester and provider "become known to each other":
  o The discovery service somehow obtains both the Web service description and an associated functional description of the service.
  o The requester entity supplies criteria to the discovery service to select a Web service description based on its associated functional description, capabilities and potentially other characteristics.
  o The discovery service returns one or more Web service descriptions that meet the specified criteria.
− The requester and provider entities agree on the semantics of the desired interaction.
− The service description and semantics are input to, or embodied in, both the requester agent and the provider agent, as appropriate.
− The requester agent and provider agent exchange SOAP messages on behalf of their owners.

References

1. W3C Web Services Architecture, http://www.w3.org/TR/ws-arch

3.3 SCA
The SCA specification [1] defines a component model in which services are used to model the interaction between components. Components in SCA can be built with any technology. SCA defines a common assembly mechanism to specify how those components are combined to form applications. Even though SCA is technology independent, it is still vendor dependent, as the communication between different components can be implemented differently by each SCA vendor. Inter-vendor communication has to be realized by common standards such as Web Services. The atomic building block of the Service Component Architecture is a component. A component is an instance of an implementation plus an appropriate configuration. The configuration is expressed in the XML-based Service Component Definition Language (SCDL). It is used to describe the interaction of the component with the outside world.


The figure above illustrates the structure of a component. Besides the implementation, each component consists of three fundamental parts: services, references and properties. Services offer operations to the clients of the component; through them, clients are able to use the component's functionality. The description of services depends on the technology used: a component built in Java might use Java interfaces, while another component that is built on BPEL is likely to use WSDL for its service description. References are the inverse of a service description and determine what services a component uses. The use of references allows an instance of an SCA runtime to handle inter-component dependencies at runtime. Properties can be seen as parameters for an SCA component. Each property contains a value that is accessed by the component on instantiation. Properties can be used to supply a component with information about its context.

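A minimal SCDL configuration sketch for a composite that groups two such components could look as follows; the component, service, property and class names are hypothetical.

<composite xmlns="http://www.osoa.org/xmlns/sca/1.0"
           targetNamespace="http://example.org/composites"
           name="OrderProcessing">

  <!-- service of the composite, promoted from an inner component -->
  <service name="OrderService" promote="OrderComponent"/>

  <component name="OrderComponent">
    <!-- the implementation technology is declared per component -->
    <implementation.java class="org.example.orders.OrderServiceImpl"/>
    <!-- a property configuring this component instance -->
    <property name="currency">EUR</property>
    <!-- a reference wired to another component of the composite -->
    <reference name="warehouse" target="WarehouseComponent"/>
  </component>

  <component name="WarehouseComponent">
    <implementation.java class="org.example.orders.WarehouseImpl"/>
    <reference name="shipping"/>
  </component>

  <!-- reference of the composite, promoted from an inner component -->
  <reference name="ShippingService" promote="WarehouseComponent/shipping"/>
</composite>

Under these assumptions, an SCA runtime would read such a file at deployment time, wire the warehouse reference of OrderComponent to WarehouseComponent, and expose the promoted service and reference as the outside view of the composite.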

Components can further be assembled into larger blocks, called composites. The definition of a composite is also described in an SCDL configuration file. It defines what components belong to the composite and how they interact. The listing above contains an example of an SCDL configuration. The XML root node composite contains child nodes for every component or composite inside the corresponding composite. These child nodes in turn contain child nodes for service, reference or property definitions. The components inside a composite can run on a single machine or be distributed across machine boundaries. Descriptions of a composite also contain the outside view of the composite. This view is similar to the outside view of a component and includes exposed services, references and properties.

Composites can contain components or other composites. This allows for the creation of complex hierarchical structures. In addition to the description of possible interactions between components and composites through services and references, it is possible to specify explicit bindings. Figure 3 shows an example of an SCA composite. It contains two components A and B, with A using a service offered by B. This connection is modelled using a wire element. A wire element is used to separate the wiring from the definition of the services and references of the components and composites. In this example Component A would reference the wire and the wire would be defined to use the service offered by Component B. The composite shown in this example exposes the service offered by Component A and references the service that is referenced by Component B. It also exposes two properties to the outside world which are mapped to properties of components A and B. Figure 4 contains an example of a composite containing four composites.


The outer boundary of an SCA-based system is called a domain. Domains contain both components and composites and are tied to instances of an SCA runtime from a single vendor. The parts and structure of a domain are defined in an SCDL file that also contains all SCDL descriptions of components and composites inside the domain. The concrete interaction between components in a domain is defined by the SCA vendor. Inter-domain interaction is not part of the SCA specifications and has to be realized using standards like Web Services. Nevertheless, it is possible to specify the existence of an inter-domain interaction and the protocols used as an explicit binding in the SCDL configuration.

References

1. SCA (Service Component Architecture) Specifications V1.0, http://www.osoa.org

3.4 OASIS Reference Model for SOA
The OASIS Reference Model for Service Oriented Architecture [1], as explicitly stated in the abstract of the reference, focuses on the definition of "an abstract framework for understanding significant entities and relationships between them within a service-oriented environment". The OASIS Reference Model for Service Oriented Architecture aims at unifying concepts of SOA and has been developed primarily to address problems related to the adoption of this new paradigm (SOA) in an increasing number of contexts and specific technology implementations. Sometimes, the term is used with differing - or worse, conflicting - understandings of implicit terminology and components. This Reference Model is being developed to encourage the continued growth of different and specialized SOA implementations whilst preserving a common layer of understanding about what SOA is. While service-orientation may be a popular concept found in a broad variety of applications, this reference model focuses on the field of software architecture. The concepts and relationships described may apply to other 'service' environments; however, this specification makes no attempt to completely account for use outside of the software domain. It may be used by architects developing specific service oriented architectures or in training and explaining SOA, and is also useful for the development of consistent standards or specifications supporting that environment. The reference model covers different perspectives, mainly related to the structural and behavioural aspects of a SOA. In particular, it introduces concepts that relate to the dynamic aspects of services and concepts that refer to the meta-level aspects of services, such as service description and policies as they apply to services.
3.4.1 OASIS Conceptual Model
The main characteristic to remark is that the OASIS SOA-RM is a Reference Model; it is not a Reference Architecture. It is important to make the distinction between the SOA Reference Model and the SOA Reference Architecture; in particular, it is important to highlight that:

− The primary contribution of the Reference Model is that it identifies the key characteristics of SOA, and it defines many of the important concepts needed to understand what SOA is and what makes it important.
− A Reference Architecture takes the Reference Model as its starting point, in particular in relation to the vocabulary of important terms and concepts.
− The Reference Architecture goes a step further than the Reference Model in that it tries to show how you might actually have SOA-based systems. Consequently, how they are used and managed is at least as important architecturally as how they are constructed.
− In terms of approach, the primary difference between the Reference Model and the Reference Architecture is that the former focuses entirely on the distinguishing features of SOA, whereas the Reference Architecture introduces concepts and architectural elements as needed in order to fulfill the core requirement of realizing SOA-based systems.

Another important characteristic of the OASIS SOA-RM is that it aims at being complete, that is, it tries to define a complete reference model in which all the aspects related to SOA can be accommodated. It does not address specific aspects of SOA in a detailed manner, nor does it propose solutions for particular problems. It simply aims to introduce the main concepts of a SOA so that they are sufficient to position different approaches and solutions within a coherent view. The main topics covered by the OASIS SOA-RM concern the following aspects:

− Service Description
− Visibility of Services
− Interacting with Services
− Policies and Contracts

Service Description


SOA depends on a wide variety of descriptions to characterize the needs and capabilities it can facilitate connecting. Description elements, such as those indicating the real world effects produced by a service and those desired by the consumers, provide the basis for determining the match between consumers and providers. Policies and attributes that are needed to evaluate policy compliance are also important elements to determine the conditions under which interactions may be initiated and continue to completion, and description can inform as to which policies may or must be applied. For SOA to enable efficient connectivity between providers and consumers, descriptions must provide sufficient information to achieve visibility between the provider and consumer and to support continued interaction. The information provided by description may be augmented during the interaction. For example, the interaction may reach a point where message exchanges must be encrypted; it may or may not be important that the description indicate that at some point encrypted messages may be required. The critical point is that this additional information becomes available during the interaction and neither the provider nor the consumer is required to have undocumented a priori details about the other, including details of their needs and capabilities, in order for interaction to be initiated or proceed. Several points to make:

− The current view focuses on the description of services, but it is equally important to consider the description of the consumer.
− Descriptions are inherently incomplete. The necessary elements of description depend on the context. The intent of "standard" description sets is to capture "essential" information, i.e. that most likely to be needed. It should be understood that what is considered essential will change over time. A requirement for transparency of transactions may require additional description for those associated contexts.
− Description always proceeds from a basis of what is considered "common knowledge". This may be social conventions that are commonly expected or possibly codified in law. It is impossible to describe everything and it can be expected that a mechanism as far reaching as SOA will also connect entities where there is inconsistent "common" knowledge.
− Description from the provider and consumer are the essential building blocks for establishing the execution context of an interaction.

Visibility of Services
The OASIS SOA-RM conceptualizes visibility as the capacity for those with needs and those with capabilities to be able to see each other. Visibility is the relationship between service consumers and providers that is satisfied when they are able to interact with each other. As depicted in Figure 1-1, a concept diagram from the OASIS SOA-RM, preconditions for visibility are awareness, willingness, and reachability. These preconditions lead to participant interaction.


OASIS SOA-RM Visibility

Awareness is the knowledge of existence between a service consumer and a service provider. Awareness does not imply willingness or reachability. Service awareness requires that a Service Description – or at least a suitable subset thereof – be available so that a potential consumer is aware of the existence and capabilities of the service. Willingness is a stance held by the participating entities that predisposes them to take actions necessary to realize the real world effects of a service. Willingness to engage in service interactions may be the subject of policies. Those policies may be documented in the service description. Reachability is the relationship between service participants where they are able to interact. A service and a consumer may have awareness and willingness to interact, but if there is no communication path between the consumer and provider then the service is not visible to the consumer.

Interacting with Services
Interacting with a service involves performing actions against the service. In many cases, this is accomplished by sending and receiving messages, but there are other possible modes that do not involve explicit message transmission. For example, a service interaction may be effected by modifying the state of a shared resource. However, for simplicity, message exchange is often referred to as the primary mode of interaction with a service.


OASIS SOA-RM Service Interaction
The figure illustrates the key concepts that are important in understanding what is involved in interacting with services; these revolve around the service description, which references an information model and a behaviour model. The information model of a service is a characterization of the information that may be exchanged with the service. Only information and data that are potentially exchanged with a service are generally included within that service's information model. Loosely, one might partition the interpretation of an informational block into structure (syntax) and semantics (meaning), although both are part of the information model. The second key requirement for successful interactions with services is knowledge of the actions invoked against the service and the process or temporal aspects of interacting with the service. This is characterized as knowledge of the actions on, responses to, and temporal dependencies between actions on the service. The action model of a service is the characterization of the actions that may be invoked against the service. Of course, a great portion of the behaviour resulting from an action may be private; however, the expected public view of a service surely includes the implied effects of actions. The process model characterizes the temporal relationships and temporal properties of actions and events associated with interacting with the service. Beyond the straightforward mechanics of interacting with a service there are other, higher-order, attributes of services' process models that are also often important. These can include whether the service is idempotent, whether the service is long-running in nature and whether it is important to account for any transactional aspects of the service.


Policies and Contracts
The OASIS SOA-RM conceptualizes a policy as the representation of a constraint or condition on the use, deployment, or description of an owned entity as defined by any participant. A contract is a representation of an agreement between two or more participants. The next figure is a concept diagram from the OASIS SOA-RM for policies and contracts. Core aspects of contracts and policies are an assertion, an owner, the participants in agreement, and enforcement of the policy or contract. An assertion may be an expression of a policy and/or a contract. Assertions are enforceable and measurable statements about the way a service is realized.

OASIS SOA-RM Policies and Contracts
When conducting business via services in a SOA, policies and contracts have meaning derived by the participants in the social structure.

References

1. OASIS Reference Model for Service Oriented Architecture, http://docs.oasis-open.org/soa-rm/v1.0/soa-rm.pdf

3.5 OASIS Reference Architecture Foundation for SOA
The OASIS Reference Architecture Foundation for Service Oriented Architectures [1] (from now on OASIS SOA-RA) describes an abstract realization of a SOA that provides a template to build a concrete SOA. It aims to gather the key concepts and elements that are present in any well designed SOA-based system, whereas it does not aim to embrace all the possible technologies needed to realize SOA-based systems. The OASIS SOA-RA is built on top of the OASIS Reference Model [2], whose primary contribution is the definition of the most important concepts that characterize a SOA, and it goes a step further by showing how a SOA-based system can be used, realized and owned. It is important to state that the description of the elements and of the relationships enabling the usage, realization and management of a SOA-based system is independent of specific concrete technologies. The OASIS committee made a distinction between a Reference Architecture Foundation and a Reference Architecture: while the former is defined, as said above, as an abstract realization of SOA, the latter is defined as an enhancement of the reference model, providing not only a model for the abstract architectural elements used to implement a SOA-based system but also a description of what is involved in the realization of the modelled entities.
The OASIS SOA-RA starts from the assumption that the resources, the governance of the system and the people interacting through the system are distributed across ownership boundaries. Furthermore, the interaction between people using the system is based on a reliable exchange of messages. This assumption does not exclude contexts where apparently no ownership boundaries exist, such as SOA within an enterprise. Indeed, even within a single organization there can be departments and groups that act as though they had ownership boundaries.
The SOA-RA is presented through three different viewpoints, each of which is addressed to certain stakeholders and deals with different concepts. In particular, the Service Ecosystem viewpoint aims to capture what SOA means for the people using such systems, to provide safe and effective support for their business; the Realizing Service Oriented Architectures viewpoint deals with the requirements for realizing a SOA and supports the effective development of a SOA-based system, involving stakeholders such as system architects and business analysts; and finally, the Owning Service Oriented Architectures viewpoint is addressed to service consumers, service providers and system architects and deals with the issues involved in the management of a SOA.
The three principal goals of the OASIS SOA-RA concern effectiveness, confidence and scalability. Effectiveness is the ability of SOA-based systems to enable interactions among participants and specific facilities that meet their needs; service consumers and service providers should be able to use the SOA-based system with a certain level of confidence to conduct their business as well as possible; finally, scalability is the ability of the system to grow smoothly in complexity when the number and the complexity of the services and of the interactions between participants increase. The OASIS SOA-RA is based on the following principles, acting as a guide for its evolution:

− Technology neutrality, that is, the independence of the OASIS SOA-RA from any concrete technology
− Parsimony, concerning economy of design, i.e. the optimization of the design to reduce complexity, including by minimizing the number of components and relationships needed
− Separation of concerns, focusing on the provision of several loosely coupled models, i.e. a clear separation of the architectural models so that each of them gathers the concepts that are of interest to a well-defined area. This helps stakeholders focus only on the parts of the architecture that are relevant to their specific needs
− Applicability, which fixes the scope of the OASIS SOA-RA. The OASIS SOA-RA addresses different domains and aims to be relevant for the problems raised in those domains. The domains of interest are "Intranet SOA", i.e. SOA within an enterprise, "Internet SOA", i.e. SOA outside the enterprise, "Extranet SOA", i.e. SOA with suppliers and trading partners, and "net-centric SOA".

The three viewpoints mentioned above, namely the Service Ecosystem, Realizing Service Oriented Architectures, and Owning Service Oriented Architectures viewpoints, are described in more detail in the following.

Service Ecosystem viewpoint

As defined by OASIS SOA-RA, "the Service Ecosystem View focuses on what a SOA-based system means for people to participate in it to conduct their business". This definition draws attention to the people that use the SOA-based system. Indeed, SOA implies the usage of artefacts and resources of the system, but these are usually not the primary interest of the participants, who are mainly interested in using the system to achieve some goal. In this view the primary objective is the modelling of the people involved and of their goals and activities, by describing what it means to operate in a SOA ecosystem when the participants may belong to different organizations. This view introduces three main models:

− Acting in a SOA ecosystem model, which introduces the key concepts involved in actions
− Social structure model, which introduces the key elements required to define relationships between participants
− Acting in a social context model, which merges the previous two models and shows how risks, ownership and transactions are key concepts in the SOA ecosystem.


Model elements described in the Service Ecosystem view1

Without going too deeply into each model, of particular interest for NEXOF is the Acting in a SOA ecosystem model, which describes the key principles of action as an abstract concept. A first distinction applies to the concepts of action and joint action.

Actions, Real World Effect and Events2

1 OASIS Reference Architecture Foundation for Service Oriented Architecture, p. 25
2 OASIS Reference Architecture Foundation for Service Oriented Architecture, pp. 27-28-30


Joint Action2

An action is defined as "the application of intent to achieve an effect (within the SOA ecosystem)". A joint action, on the other hand, is defined as "a coordinated set of actions involving the efforts of two or more actors to achieve an effect". The main difference between the two definitions lies in the participants involved in performing the action: the definition of action involves a single actor that performs the action causing an effect, whereas the definition of joint action involves two or more actors performing a set of actions aimed at achieving an effect, i.e. a joint action cannot be performed by a single actor. Of most interest for the OASIS SOA-RA are the joint actions. The OASIS committee models joint actions at two different levels, one concerning the messages communicated among the different entities of the IT system and the other concerning the effects that the actions cause, which are relevant for the participants using and offering services. Indeed, the main mechanism provided to the actors to interact with each other is the exchange of messages, and the communication and interpretation of their content is the foundation of all interactions within the SOA ecosystem. On the other hand, the content actually communicated is itself considered a form of action, called a communicative action, and it also involves the semantic aspect of the communication. In a certain sense, a communicative action can be considered as the action performed by the message exchange.
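To make the distinction concrete, the following is a minimal Python sketch of our own; the class and attribute names are illustrative and are not taken from the OASIS specification. An action has a single actor, while a joint action requires at least two.

from dataclasses import dataclass
from typing import List


@dataclass
class Action:
    """An intent applied by a single actor to achieve a real-world effect."""
    actor: str
    intended_effect: str


@dataclass
class JointAction:
    """A coordinated set of actions by two or more actors achieving one effect."""
    actions: List[Action]
    intended_effect: str

    def __post_init__(self):
        # A joint action cannot be performed by a single actor.
        if len({a.actor for a in self.actions}) < 2:
            raise ValueError("a joint action requires at least two distinct actors")


# Example: a consumer sends a request and a provider answers it; the message
# exchange is itself a (communicative) joint action.
exchange = JointAction(
    actions=[Action("consumer", "request quote"), Action("provider", "return quote")],
    intended_effect="quote delivered to consumer",
)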


Communication as Joint Action2

The actions undertaken by participants are performed in a social context that defines their meaning. The embodiment of such a social context is modelled as a social structure, which provides a foundation for the security, management and governance of the SOA-based system.

Realizing Service Oriented Architectures View

As defined by OASIS SOA-RA, "the Realizing Service Oriented Architectures View focuses on the infrastructure elements that are needed in order to support the discovery and interaction with services". It aims to answer the key questions about what a service is, how services are realized and what support they need. This view introduces four models:

− Service Description Model, describing all the aspects required to inform the participants of the existence of a certain service and the conditions under which it can be used
− Service Visibility Model, which explores how visibility can be achieved, given that services can interoperate only if the participants are visible to each other
− Interacting with Services Model, which explores how to use a service to access capabilities to achieve a certain effect. The interaction with the service is characterized by a sequence of actions usually mediated by the exchange of messages
− Policies and Contracts Model, which focuses on a framework for managing the potentially large number of combinations of participant needs and service capabilities.


Model elements described in the Realizing a Service Oriented Architecture view3

The Service Description is defined as "an artefact, usually document-based, that defines or references the information needed to use, deploy, manage and otherwise control a service". It includes not only a functional description but also the policies and contracts associated with a service, as well as information needed to decide whether a service suits the needs of its consumers. The discussion driven by OASIS on this topic emphasizes that there is no single description of a service that is appropriate for all contexts and that many descriptions of the same service can exist at the same time. This suggests avoiding multiple copies of descriptions, which are likely to fall out of synch, and instead using references to the source material.

General Service Description

3 OASIS Reference Architecture Foundation for Service Oriented Architecture, p. 48

Owning Service Oriented Architectures View

As defined by OASIS SOA-RA, "the Owning Service Oriented Architectures View focuses on the issues, requirements and responsibilities involved in owning a SOA-based system". This view is particularly relevant for SOA-based systems because in such systems there are strong limits on the control and authority of any single party involved. This is most evident when the system spans multiple ownership domains, but the limitations exist even for systems deployed within a single enterprise. This view introduces the following models:

− Governance Model, structured as segmented models, each presenting a motivation and a measurement of compliance. It is distinct from governance discussed in terms of IT governance and focuses on the aspects that describe governance for SOA
− Security Model, which focuses on the aspects relevant for preventing behaviours (accidental or malicious) that can damage or compromise trust in the system and the availability of its capabilities
− Management Model, which focuses on three domains: i) the management and support of the resources involved in the SOA-based system, ii) the promulgation and enforcement of policies and contracts, and iii) the management of participant-to-participant and participant-to-service usage relationships
− Testing Model, which focuses on demonstrating an adequate level of reliability, correctness and effectiveness of the SOA-based system. It faces the challenges of accommodating distributed resources, access to the system by an unbounded consumer population, and the flexibility to create new solutions from existing components over which the developer has little or no control.

Model elements described in the Owning Service Oriented Architectures View

References
1. SOA-RA, http://docs.oasis-open.org/soa-rm/soa-ra/v1.0/soa-ra.html
2. SOA-RM, http://docs.oasis-open.org/soa-rm/v1.0/soa-rm.html


3.6 SeCSE Conceptual Model

The SeCSE conceptual model [1] is described by means of different diagrams, each offering a view on a specific aspect of the service-oriented system engineering process:

1. Agents and actors
2. Core conceptual model
3. Types of Services
4. Service description
5. Facets
6. Service discovery
7. Service composition
8. SLA Negotiation
9. Service monitoring
10. Service publication

The SeCSE conceptual model contains a UML class diagram of all relevant concepts, including relationships such as associations/aggregations and, where applicable, attributes that refine the concepts. This section also includes a textual description of all elements.

3.6.1 Agents and actors

This model lists the relevant Agents (entities of the real world) and Actors (roles the real-world entities may play) available in the SeCSE context, and the relationships between them. The agents are:

Person, Organization, System, Legacy System, Software System.

The actors are:

Service, Service Provider, Service Developer, Service Integrator, Service Intermediary, Service Consumer, Service Monitor, Service Certifier, Testing Authority, Negotiation Agent.

Both classifications describe entities acting in the SeCSE context (i.e., they are Agent-Actor entities). The classification of actors is overlapping, e.g., a Person, an Organization or a System can act as a Service Consumer and as a Service Integrator at the same time, while the classification of agents is disjoint, e.g., a Service Consumer can be either a Person, an Organization, or a System.
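The overlapping/disjoint distinction can be pictured with a small Python sketch; the names below are illustrative assumptions of ours and do not correspond to any SeCSE artefact or API.

from enum import Enum


class AgentKind(Enum):
    PERSON = "Person"
    ORGANIZATION = "Organization"
    SYSTEM = "System"


class ActorRole(Enum):
    SERVICE_PROVIDER = "Service Provider"
    SERVICE_CONSUMER = "Service Consumer"
    SERVICE_INTEGRATOR = "Service Integrator"


class Agent:
    def __init__(self, name, kind):
        self.name = name
        self.kind = kind          # disjoint: exactly one agent kind
        self.roles = set()        # overlapping: the same agent may play many roles

    def play(self, role):
        self.roles.add(role)


acme = Agent("ACME Ltd", AgentKind.ORGANIZATION)
acme.play(ActorRole.SERVICE_CONSUMER)
acme.play(ActorRole.SERVICE_INTEGRATOR)   # same agent, two actor roles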

Agents and actors

3.6.2 Core conceptual model

A Service is an entity identifying a set of offered business functions. A Service can be characterized by one or more Service Descriptions, each offering a different view on the same Service and usually collected in some Service Registry. A Service Description usually corresponds to the Service Specification defined by the Service Developer, but it may also include some Service Additional Information (e.g., ratings and measured QoS) provided, for instance, by Service Consumers using that Service, by the Service Certifier certifying some properties of that Service (e.g., trustworthiness), or by the results of monitoring activities. Both Service Specification and Service Additional Information are defined by means of Facets which specify some Service Property.

A Service may be an Abstract Service or a Concrete Service. An Abstract Service captures the idea of "business service" (an offered service which does not necessarily have a concrete implementation) from the Service Provider perspective and represents a desired service (thus related to a Service Request) from the Service Consumer point of view. An Abstract Service may be published and discovered just as Concrete Services are, but before being able to serve a Service Request it has to be "implemented" by a Concrete Service. A Concrete Service is implemented by a Software System and makes available a set of Operations, through which it serves Service Requests expressed by Service Consumers. In order to discover a Service, a Service Consumer expresses a Service Request which generates Queries that can match zero or more Service Descriptions (see the Service Discovery diagram for details). After being discovered, a Service may serve the Service Request, i.e., it will be used by the Service Consumer, which will invoke its Operations.


Core conceptual model

3.6.3 Types of Services

This view shows three possible ways to classify Services. The first classification distinguishes between Abstract and Concrete Services, as described above. The second classification concerns the possibility of having Composite Services, resulting from the aggregation of other Services, and Simple Services, i.e. services not formed by other services. The third classification distinguishes Stateless from Stateful Services, the latter being characterized by a Current State entailing: a) the fact that different executions of the same service operations with the same input at different times may produce different results; and/or b) the fact that some relations exist between the service operations requiring them to be invoked in a specific order. This specialization of Stateful Service corresponds in the diagram to the entity Conversational Service. The three classifications are completely orthogonal and, combined, result in eight possible types of services (see the snippet below), all considered relevant and worth supporting with proper technology. The diagram also shows the concept of Service Composition, of which the Composite Service is a specialization. While a Composite Service is a Service, a Service Composition describes the collaboration between a collection of Services and may not actually correspond to a Service. One of the diagrams of the conceptual model is devoted to describing this concept in more detail.
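As a quick illustration of the eight resulting types, the following snippet simply enumerates the combinations of the three classifications; it is purely illustrative and not part of the SeCSE model.

from itertools import product

abstraction = ["Abstract", "Concrete"]
structure = ["Simple", "Composite"]
state = ["Stateless", "Stateful"]

service_types = list(product(abstraction, structure, state))
assert len(service_types) == 8
for combo in service_types:
    print(" ".join(combo) + " Service")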


Types of Services

3.6.4 Service description

This view focuses on the Service Description, which represents the overall set of information available for a given Service and through which Services are known by potential consumers. A Service Description comprises a Service Specification and, if available, some Service Additional Information. A Service Specification is usually defined by the Service Developer and may include both functional and non-functional information, such as information on the service interface, the service behaviour, service exceptions, test suites, commercial conditions applying to the service (pricing, policies, SLA negotiation parameters) and communication mechanisms. Service Additional Information is usually defined by actors other than the Service Developer (e.g., by Service Consumers, Service Monitors, or Service Certifiers) and may include information such as user ratings, service certificates, measured QoS and usage history.


Service description

3.6.5 Facets

Both Service Specification and Service Additional Information are specified by means of Facets. Each Facet is the expression of one or more Service Properties in some specification Language. A Service Property can describe a characteristic of a Service or of some of its Operations. Service Properties have one or more associated Quality Metrics, which are the basis for measuring their quality characteristics. Facets defined to detail the Service Specification include the following: Signature (defining the syntax to call the Operations of a Service), Exception (defining the exceptions that can be raised by a Service), Service Information (providing a natural-language description of the relevant information about a Service), Operational Semantics (formally describing the semantics of each service Operation in terms of pre- and post-conditions), Commerce (defining the QoS attributes of the Service), Contractual Condition (providing information about the cost and the policies that apply to a Service), and Testing (providing a list of the test cases that the Service passed before going to production and any other information relevant to testing, such as oracles and scaffolding). Facets related to Service Additional Information include the following: Measured QoS (storing a measure of the QoS properties of a service as verified and provided by a Service Certifier), Rating (storing opinions on the quality attributes of a Service expressed, e.g., by Service Consumers based on their experience in using the Service), Certificate (providing a certificate stating some characteristics of a service), and Usage History (storing logs of the usage activities of a service).

Facets

3.6.6 Service discovery

This view focuses on the Service Request and the process of Service Discovery. A Service Consumer expresses one or more Service Requests in order to discover Concrete Services that can serve its requests and satisfy its needs. Service discovery is usually executed in at least three different moments: 1) when the requirements for a new system are gathered (Early Discovery), 2) when the system is being designed and new specific needs for services are identified (Architecture Time Discovery), or 3) when the system is running and new services need to be discovered to replace the ones the system is currently using (Run-Time Discovery).
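A hedged sketch of the matching step is given below; the classes and the keyword/QoS matching logic are our own simplification, not the SeCSE discovery mechanism, but the same step can in principle be run at early, architecture-time or run-time discovery.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class ServiceDescription:
    name: str
    keywords: List[str]
    qos: Dict[str, float] = field(default_factory=dict)


@dataclass
class ServiceRequest:
    keywords: List[str]
    min_availability: float = 0.0

    def matches(self, desc: ServiceDescription) -> bool:
        # A query generated from the request matches a description when the
        # requested keywords and the minimum QoS level are both satisfied.
        return (set(self.keywords) <= set(desc.keywords)
                and desc.qos.get("availability", 0.0) >= self.min_availability)


registry = [
    ServiceDescription("PaymentA", ["payment", "credit-card"], {"availability": 0.99}),
    ServiceDescription("PaymentB", ["payment"], {"availability": 0.95}),
]

request = ServiceRequest(keywords=["payment"], min_availability=0.98)
candidates = [d for d in registry if request.matches(d)]
print([d.name for d in candidates])   # ['PaymentA']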


Service discovery

3.6.7 Service composition

This diagram shows the main concepts related to the composition of services. A Service Composition is a more general concept than a Composite Service, i.e., a Service Composition is not necessarily a (composed) Service. For example, a volatile Service Composition might not expose an interface. Conversely, a Service Composition could be 'wrapped' by a Service Specification in order to be re-used in a subsequent, more complex Service Composition. A Service Composition can be described both from an external and from an internal viewpoint. From an external viewpoint, a Service Composition complies with some Composition Architectural Style, which defines the roles its services can play, and can be described by a Service Interaction (also known as choreography). The Service Interaction entity describes the global roles involved in a Service Composition and the Interaction Model existing between them. For example, in the case of a B2B interaction between two enterprises, this view would describe the role assumed by the first enterprise, the role assumed by the second one, and the interaction protocols between them. This view could be useful in the case of a peer-to-peer based composition, as in the B2B example: each enterprise would be represented by a service and the two services would communicate with each other in a peer-to-peer fashion. The internal viewpoint of a composition is called a Service Process (also known as orchestration). Considering again the B2B example, each of the services offered by the two enterprises could be implemented as a process-based composition, and each Process could be described by a service process view. This view describes processes composed of activities (or tasks). An Activity represents a functionality a Process has to accomplish in order to achieve its final business goal. The execution of a Composition may require the creation of a Transaction that instantiates some Transaction Pattern; in this sense Activities are atomic steps for which some property applies (e.g., ACID). A Policy associated with a transaction is a collection of assertions that declare the semantics of the transaction itself (e.g., ACID or long-running, participants, coordination protocol, transaction faults and corresponding actions to be performed, etc.).

Service composition

3.6.8 SLA Negotiation

This diagram focuses on the entities and the activities characterizing the process of Service Level Agreement (SLA) Negotiation. The negotiation process consists of two or more Negotiation Agents, each acting on behalf of a Service Provider or a Service Consumer, formulating, exchanging and evaluating a number of SLA Proposals in order to reach an SLA Contract for the provision/consumption of a service. An SLA Proposal can be an SLA Offer or an SLA Request that a Negotiation Agent formulates enacting a certain Strategy. In either case, an SLA Proposal is an instance of an SLA Template and complies with a Negotiation Protocol, both specified by the Commerce Facet included in the Service Description. An SLA Proposal specifies negotiation values for a number of Service Properties, such as QoS attributes. When the negotiation process leads to an agreement between the involved parties, an SLA Contract enclosing the agreed SLA Proposal is established between them.
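The following sketch illustrates only the general idea of proposal exchange and concession; the single negotiated property and the fixed concession strategy are assumptions of ours, not part of the SeCSE negotiation protocol.

from dataclasses import dataclass


@dataclass
class SLAProposal:
    response_time_ms: int   # the negotiated Service Property


def negotiate(consumer_target: int, provider_offer: int, max_rounds: int = 10):
    """The provider concedes towards the consumer target; returns a contract or None."""
    offer = provider_offer
    for _ in range(max_rounds):
        if offer <= consumer_target:
            return SLAProposal(response_time_ms=offer)   # agreement reached: SLA Contract
        offer = int(offer * 0.9)                          # simple concession Strategy
    return None


contract = negotiate(consumer_target=200, provider_offer=400)
print(contract)   # an SLAProposal once the offer meets the consumer target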


SLA Negotiation

3.6.9 Service monitoring

A Service can be monitored if the software system that features it offers suitable mechanisms, i.e., Monitoring Sockets. A Monitoring Socket is able to produce Monitoring Data that are then checked by some Monitoring Rule to verify some Monitored Constraints expressed over one or more Quality Metrics. These, in turn, express measures of some Service Property. A Monitored Constraint can also trigger proper Recovery Actions when the constraint is not fulfilled. Service Properties can refer both to an entire service (e.g., a Mean Time Between Failures (MTBF) of 1h per week) and to one or more operations offered by the service itself (e.g., operation X has to feature some transactional property). QoS Characteristics represent a particular type of Service Property related to the QoS of a service or of its operations. Monitoring Data can be collected in a History; in some cases the Monitored Constraints check an entire history rather than a single datum. Service monitoring is performed by a Service Monitor. The kind of properties involved in a Monitored Constraint usually depends on the actual agent that performs the monitoring and on its visibility of the service execution.
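The sketch below illustrates the chain from monitoring data to constraint checking to recovery; the metric name, threshold and recovery action are illustrative assumptions of ours, not SeCSE interfaces.

from statistics import mean
from typing import Callable, List


class MonitoringRule:
    def __init__(self, metric: str, threshold: float, recovery: Callable[[], None]):
        self.metric = metric
        self.threshold = threshold
        self.recovery = recovery
        self.history: List[float] = []          # Monitoring Data collected in a History

    def on_data(self, metric: str, value: float) -> None:
        if metric != self.metric:
            return
        self.history.append(value)
        # Monitored Constraint checked over the whole history, not a single datum.
        if mean(self.history) > self.threshold:
            self.recovery()


rule = MonitoringRule(metric="response_time_ms", threshold=300.0,
                      recovery=lambda: print("constraint violated: rebinding service"))

for sample in (250.0, 280.0, 420.0):            # data emitted by a monitoring socket
    rule.on_data("response_time_ms", sample)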

Service monitoring

3.6.10 Service publication

This diagram highlights two specific n-ary associations and the entities involved: publication and discovery. As for publication, a Service Provider publishes one or more Service Descriptions on a Service Registry. As for discovery, a Service Consumer queries a Service Registry based on proper Service Requests. Service Registries can be organized into Federated Service Registries resulting from an agreement made by organizations running service registries to achieve a joint aim (e.g., being focused on a similar topic, having some trust relationship, etc.). Federations can be used to propagate information (e.g., service requests or service publications) among different registries.
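A minimal sketch of publication propagation through a federation is shown below; the registry class and its propagation rule are our own assumptions, not a SeCSE interface.

from typing import List


class ServiceRegistry:
    def __init__(self, name: str):
        self.name = name
        self.descriptions: List[str] = []
        self.federation: List["ServiceRegistry"] = []

    def publish(self, description: str, propagate: bool = True) -> None:
        if description not in self.descriptions:
            self.descriptions.append(description)
            if propagate:
                # Federations propagate publications to the peer registries.
                for peer in self.federation:
                    peer.publish(description, propagate=False)


telco, logistics = ServiceRegistry("telco"), ServiceRegistry("logistics")
telco.federation.append(logistics)
telco.publish("TrackShipment service description")
print(logistics.descriptions)   # the publication propagated through the federation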


Service publication

3.6.11 SeCSE methodology

At the highest level the SeCSE methodology is represented by four important functional areas:

1. Service-Centric System Engineering functional area. This represents the core of the SeCSE project, and most of the processes that are part of this area are supported by SeCSE tools or methods. This functional area comprises those Service-Centric Systems (SCS) processes which provide specific support for the analysis, design, development, deployment and running of a software system based on services. It is related to, and dependent upon, the following two functional areas.
2. Service Engineering functional area. The focus here is on the processes for developing, testing and delivering atomic services. In particular, attention is paid to the "primitives" necessary to make Service-Centric System Engineering happen. It should be noted that the SeCSE project addresses only the subset of this functional area that is integral to supporting Service-Centric System Engineering.
3. Service Acquisition/Provisioning functional area. This area contains processes concerned with provisioning and acquiring services: the service marketplace. As with Service Engineering, the SeCSE project addresses only the subset of this functional area that is integral to the support of Service-Centric System Engineering.


4. Validation/Verification (V&V) is a cross-cutting functional area that verifies the validity of the outcomes of the processes belonging to the functional areas above (e.g. service architecture validation). V&V is not considered a functional area on its own, but we identify it here to explain its importance to the functional area structure.

The figure below is a UML-style representation of the SeCSE methodology ecosystem just described. It identifies the main actors and roles, the main processes (as use cases) and the links representing interaction relationships among actors/roles and processes. In particular, the circular arrow in the middle of the figure signifies that the various processes can be executed sequentially, starting with building the atomic services, provisioning them, and composing them into a service-centric system, which in turn can potentially lead to a higher-level service being offered in the marketplace.

The SeCSE ecosystem view

This model provides a common understanding of the main actors, entities, and artefacts involved in the creation of a service-centric system. The described model does not consider integration with other existing models; for instance, the handling of SLAs could be extended so that their fulfilment is explicitly checked at run time using Service Monitoring.

References


1. SeCSE conceptual model, http://secse.eng.it/wp-content/uploads/a5d93-secse_conceptual_model_v4.pdf


4 SURVEY OF RELEVANT STANDARDS, INITIATIVES AND PRODUCTS

4.1 Introduction

This section addresses the state of the art of standards, initiatives and products related to the SOA world. It is structured in two sections: the former collects the standards and initiatives included in the census, the latter collects products that are widely used in common SOA scenarios.

4.2 Standards and initiatives

4.2.1 Alert Standard Format (ASF). DMTF

Alerting technologies provide advance warning and system failure indication from managed clients to remote management consoles. Initial generations of this technology, like the IBM/Intel Alert on LAN (AoL) implementations, provided remote notification of client system states and hardware or software failures without regard to operating system or system power state. The Intelligent Platform Management Interface initiative, led by Intel and others, subsequently provided an open alert interface: the Platform Event Trap. Management console providers and system OEMs were faced with the possibility of supporting multiple alerting interfaces. Once a system alert provides its warning or error report, the next step in remote system manageability is to allow corrective action to be taken; these actions include the ability to remotely reset or power the client system on or off. When the system is in an OS-present state, these actions can be provided by Common Information Model (CIM) interfaces that interact with the local system and provide orderly shutdown capabilities. This specification provides similar functionality when the system is in an OS-absent state, as added by the second generation of the IBM/Intel AoL technologies. The principal goal of this specification [1] from the Distributed Management Task Force (DMTF) is to define standards-based interfaces with which vendors of alerting and corrective-action offerings can implement products and ensure interoperability. These vendors include:

− Add-in card suppliers
− SMBus sensor suppliers
− Communication controller suppliers
− System vendors
− Operating system vendors, with a primary focus on operating systems which are ACPI-aware
− Management application vendors

The standards-based protocols (e.g. SNMP, UDP) upon which this specification's interfaces are built are lightweight, bit-based information carriers, since this specification anticipates that the majority of ASF client implementations will be hardware and/or firmware based. CIM-based configuration methods can provide the abstraction layer between OS-present XML implementations and ASF-defined low-level primitives.


References

1. Specification, http://www.dmtf.org/standards/documents/ASF/DSP0136.pdf

4.2.2 Availability Management for Java. JCP

This is another specification included in the Java Community Process (JCP) [1]. The purpose of this group of experts, created in October 2007, is to enable availability frameworks (frameworks that coordinate redundant resources within a cluster to deliver a system with no single point of failure) to supervise and control Java runtime units in a standardized way (using a well defined API). The API has the following goals:

− It shall not specify the availability management framework itself; it shall only specify the means by which the framework can supervise and control the Java units within a JVM. The means by which the framework instantiates JVMs and communicates with the JVMs are outside the scope of the specification. The specification will define the local interactions within one JVM only.
− It shall allow different service providers to provide support for specific availability frameworks, standardized or proprietary. It is required that a service provider for the standardized AMF (Availability Management Framework) of SA Forum shall be feasible.
− It shall be designed with Java EE as the main target, although parts of the specification will also be useful on Java SE. The specification has to consider the constraints set by the component models of Java EE.
− It shall specify a basic set of features that can be considered useful for Java EE and possible to support by most availability management frameworks. This implies that only a subset of the features of AMF will be supported.
− It shall support Java EE applications that are not at all, to some extent, or completely aware of the control of the availability management framework. It is anticipated that the main part of the specification is implemented in the Java EE server and that existing Java EE applications can take advantage of the availability support without any changes.
− It shall not handle all aspects of clustered Java systems. In particular it shall not specify any state replication solution, although it may specify that the API can give hints to such replication solutions in the form of reasons for the activation or the deactivation of a unit.

Notes: Since the committee has been recently created, no documentation has been provided yet.

References

1. JCP Home, http://jcp.org/en/jsr/detail?id=319


4.2.3 Asynchronous Service Access Protocol (ASAP). OASIS

The main purpose of ASAP is to create a very simple extension of the SOAP protocol to enable generic asynchronous and long-running Web Services and to make them easy to implement and connect to. SOAP is a request/reply protocol; ASAP is an asynchronous protocol that allows the monitoring, control and development of Web Services that have long response times (e.g. a service that includes some human task in a workflow). This opens the possibility of a completely new type of Web Services. With this kind of protocol it would be possible, for instance, to be notified when flight XX123 has landed, because the response will come back as soon as possible, but not immediately. Or, if the Web Service is on a mobile wireless device that is off-line, the response can only occur when the device is connected again. The protocol has been designed with the following things in mind:

− The protocol should not reinvent anything unnecessarily. If a suitable standard exists, it should be used rather than re-implemented in a different way.
− The protocol should be consistent with XML Protocol and SOAP. This protocol should be easy to incorporate into other SOAP-based protocols that require asynchronous communication.
− The protocol should be the minimum necessary to support a generic asynchronous service. This means being able to start, monitor, exchange data with, and control a generic asynchronous service on a different system.
− The protocol must be extensible. The first version will define a very minimal set of functionality, yet a system must be able to extend the capability to fit the needs of a particular requirement, such that high-level functionality can be communicated which gracefully degrades to interoperate with systems that do not handle those extensions.
− Like other Internet protocols, ASAP should not require or make any assumptions about the platform or the technology used to implement the generic asynchronous service.
− Terseness of expression is not a goal of this protocol. Ease of generating, understanding and parsing should be favoured over compactness.
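The following Python sketch illustrates only the interaction pattern ASAP targets (an immediate acknowledgement carrying an instance identifier, with the result notified later); it does not use ASAP message syntax, and all names are illustrative assumptions.

import threading
import time
import uuid


class AsyncService:
    def __init__(self):
        self.observers = {}

    def start(self, request: str, notify) -> str:
        """Return an instance id immediately; deliver the result asynchronously."""
        instance_id = str(uuid.uuid4())
        self.observers[instance_id] = notify

        def work():
            time.sleep(1.0)                     # stands in for a long-running task
            self.observers[instance_id](instance_id, f"done: {request}")

        threading.Thread(target=work, daemon=True).start()
        return instance_id


service = AsyncService()
iid = service.start("check flight XX123", lambda i, r: print(i, r))
print("request accepted, instance:", iid)       # returns before the result arrives
time.sleep(1.5)                                 # wait so the notification can be seen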

References

1. OASIS ASAP Technical Committee, http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=asap
2. ASAP 1.0 Specification, http://www.oasis-open.org/committees/download.php/14210/wd-asap-spec-02e.doc

Other Links

EasyASAP, http://sourceforge.net/projects/easyasap/ - a C++ implementation
AxisASAP, http://sourceforge.net/projects/axisasap/ - a Java implementation
Wf-XML, http://www.wfmc.org/standards/wfxml.htm - a protocol on top of ASAP which extends the basic generic service to include some workflow functionality

4.2.4 BACnet. ASHRAE

BACnet [1], the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) building automation and control networking protocol, has been designed specifically to meet the communication needs of building automation and control systems for applications such as heating, ventilating, and air-conditioning control, lighting control, access control, and fire detection systems and their associated equipment. The BACnet protocol provides mechanisms by which computerized equipment of arbitrary function may exchange information, regardless of the particular building service it performs. As a result, the BACnet protocol may be used by head-end computers, general-purpose direct digital controllers, and application-specific or unitary controllers with equal effect. There are several open-source implementations:

− Open Source BACnet Protocol Stack [2]
− VTS - BACnet Test Software [3]

References

1. Home, http://www.bacnet.org/
2. Open Source BACnet Protocol Stack, http://bacnet.sourceforge.net/
3. VTS - BACnet Test Software, http://sourceforge.net/projects/vts/

4.2.5 BPEL4People. OASIS

WS-BPEL Extension for People (BPEL4People) is an extension of BPEL that describes the interactions between BPEL processes and human participants in the same business process, since most real-world business processes require human intervention. This specification was conceived by IBM and SAP, while the formal specifications have been published by an extended consortium of companies. Two specifications have been provided: BPEL4People and WS-HumanTask. The WS-HumanTask specification allows human tasks (as services), notifications, supporting operations and their life cycle to be defined. The BPEL4People specification extends BPEL to support human interactions by accessing the human tasks defined as services by means of the WS-HumanTask specification. BPEL4People uses the following technologies: BPEL 2.0, WS-HumanTask 1.0, WSDL 1.1, XSD 1.0, XPath 1.0. BPEL4People uses the BPEL extensibility mechanism, uses the WS-HumanTask namespace (prefixed "htd") and defines its own namespace. BPEL4People activities can be introduced within a BPEL process as inline tasks, using a dedicated element, or as standalone tasks defined by means of the WS-HumanTask specification. Inline notifications are specified using a dedicated element as well. BPEL4People extends the human roles defined in WS-HumanTask by adding three generic human roles (process initiator, process stakeholders and business administrators), which are assigned through three possible mechanisms: via logical people groups, via literals, or via expressions. BPEL4People defines "people activities", which are the basic element to integrate human interactions (tasks and notifications) with BPEL processes. The ways to integrate those human interactions with BPEL processes are depicted in the following picture:

Pattern 1 describes an inline human task defined as part of a people activity, which limits its use to that activity. Pattern 2 describes a top-level inline human task accessed from a people activity; that human task can also be accessed from other people activities, which fosters reuse. Pattern 3 is similar to pattern 2, but the definition of the human task is performed outside the BPEL process and is implementation specific. Pattern 4 differs from pattern 3 in that it uses a WS-callable mechanism compliant with WS standards. ActiveBPEL is a current OSS implementation of BPEL4People and WS-HumanTask.

References:
1. OASIS BPEL4People, https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/30c6f5b5-ef02-2a10-c8b5-cc1147f4d58c
2. WS-HumanTask, https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/a0c9ce4c-ee02-2a10-4b96-cb205464aa02

Other links: BPEL4People project, https://berlin.vitalab.tuwien.ac.at/prototypes/bpel4people/

4.2.6 Business Process Execution Language (BPEL4WS 1.1/2.0). OASIS

Business Process Execution Language (BPEL) is an XML language specification conceived to describe business processes in a way suitable for execution. The BPEL 1.1 specification was originally conceived by IBM and Microsoft by merging their previous business process specification languages, WSFL (IBM) and XLANG (Microsoft). The current BPEL 2.0 specification is an OASIS standard. BPEL-based business process constructions follow an orchestration pattern and leverage existing Web Services (WS) technologies, mainly WSDL. BPEL focuses on the "programming in the large" paradigm, to support long-lasting business processes. BPEL4WS is seen in the BPM community as an orchestration language for WS, but not as a BPM language.

BPEL 1.1 Features

BPEL provides constructs to create processes by aggregating WS following an orchestration pattern, defining an endpoint to access the orchestration and a central viewpoint (director). BPEL makes extensive use of WS invocations, supporting both synchronous and asynchronous invocations (the latter to support long-lasting processes). BPEL defines a process as a sequence of steps, called activities, which are divided into primitive and structured activities. Primitive activities are:

− WS invocation
− Waiting
− WS response reception
− Data variable manipulation
− Faults and exception management, and so on.

Primitive activities can be aggregated to build structured activities using some workflow constructs:

− Activities performed in ordered sequence
− Activities performed in parallel flow
− Branched activities
− Loops
− Alternatives, and so on
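The sketch below is not BPEL syntax; it merely illustrates, in Python, how an orchestration combines a sequence of invocations with a parallel flow and a simple branch, which the BPEL constructs above express declaratively in XML. The service names are invented.

from concurrent.futures import ThreadPoolExecutor


def invoke(service: str, payload: str) -> str:
    # Stand-in for a synchronous Web Service invocation.
    return f"{service}({payload})"


def process(order: str) -> str:
    # Sequence: validate, then run two invocations in parallel (a "flow"), then bill.
    validated = invoke("ValidateOrder", order)
    with ThreadPoolExecutor() as pool:
        stock = pool.submit(invoke, "CheckStock", validated)
        credit = pool.submit(invoke, "CheckCredit", validated)
        results = (stock.result(), credit.result())
    if all(results):                      # a simple branch
        return invoke("Billing", validated)
    return "order rejected"


print(process("order-42"))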

BPEL 2.0 Extensions

BPEL 2.0 extends BPEL 1.1 by adding some new features:
− new control flow constructs: repeatUntil, validate, forEach, rethrow, etc.
− renaming of some BPEL 1.1 constructs
− support for termination handlers to manage explicit termination behaviour
− variable initialisation
− variable transformation through XSLT
− variable data access through XPath
− others

BPEL vs BPMN

BPEL does not provide a graphical notation; therefore, each vendor of graphical BPEL editors provides its own particular BPEL graphical representation. Since most BPEL constructions follow a block-structured pattern, some vendors have adopted graphical representations of BPEL as structograms. Others use BPMN to represent BPEL processes, and indeed there are some mappings connecting BPEL 1.1 and BPMN (e.g. BPMN2BPEL), although they have to cope with the fundamental differences between the two languages, which make direct and reverse engineering between the two specifications difficult.

References

1. BPEL 2.0, http://docs.oasis-open.org/wsbpel/2.0/OS/wsbpel-v2.0-OS.html
2. BPEL 1.1, http://www-128.ibm.com/developerworks/library/specification/ws-bpel/

Other links:

OASIS WSBPEL committee, http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=wsbpel
BPMN2BPEL, http://www.bpm.fit.qut.edu.au/projects/babel/tools/
M.B. Juric et al. Business Process Execution Language. Packt Publishing Ltd. 2004.

4.2.7 Business Process Definition Metamodel (BPDM). OMG

Business Process Definition Metamodel (BPDM) is an OMG RFP initiative to build a meta-model comprising the elements required to define business processes executed in one enterprise and their collaboration with other business units belonging to the same enterprise or to external enterprises. BPDM aims at improving the communication on business process modelling between business process modellers and software process modellers, facilitating the creation and adoption of more specialized tools for the analysis and design of business processes. BPDM relies on the XMI specification to exchange models, in a similar way as UML does.

Other links:
BPDM RFP, http://www.omg.org/cgi-bin/doc?bei/03-01-06
OMG BPDM (Business Process Definition Metamodel), http://www.omg.org/cgi-bin/doc?bei/03-01-06

4.2.8 Business Process Management Language BPML 1.0

Business Process Modelling Language (BPML) is a BPMI initiative to specify an XML meta-language for the description of business processes. BPML is a superset of BPEL offering a complete language to specify complete real-world business processes, but it is no longer supported since BPMI was merged into OMG, which favours BPEL4WS.

Other links:
BPML, http://www.ebpml.org/bpml.htm
Intalio's open BPMS, http://www.intalio.com/


NOTE: Since BPML is no longer supported by BPMI (OMG), we should consider dropping this initiative from the survey of BPM standards and technologies.

4.2.9 Business Process Modeling Language BPMN 1.1. OMG

Business Process Modeling Notation (BPMN) is an OMG standard specification of a graphical notation to represent business processes as workflows. BPMN was initially specified by the Business Process Management Initiative (BPMI) but now belongs to OMG, since the two organizations merged. The primary goal of this graphical notation is to represent business processes in a way easily understandable by the main business process participants (stakeholders, business analysts, technical developers, business managers, etc.), who have quite different technical backgrounds. BPMN representations are suitable for automatic translation into business process executable languages such as BPEL or XPDL, which facilitates business process instantiation. The current BPMN 1.1 will be superseded by version 2.0, currently a working proposal.

BPMN is a graphical notation to describe a Business Process Diagram (BPD), a graphical representation of a Business Process Model, which represents business processes using workflow (flowchart) techniques that connect activities or tasks with flow controls. BPMN fosters the simplification of BPM and copes with BPM complexity by grouping its graphical elements into four categories: flow objects, connecting objects, swimlanes and artifacts.

Flow objects are: event, activity, gateway. They are the basic elements to describe a BP. Events can be start, intermediate or end events and are used to signal particular occurrences in time during the BP flow. Activities describe particular tasks or work performed during the BP. Gateways are flow control mechanisms like decision, fork, merge and join points.

Connecting objects connect flow objects to create the basic pattern of the BP. Connecting objects are: sequence flow, message flow and association. Sequence flow is used to determine the order in which activities are performed. Message flow represents the flow of messages exchanged between process participants. Association is used to link artifacts (i.e. inputs and outputs) with flow objects (i.e. activities).

Swimlanes are used to organize the BPD according to different functional responsibilities. Swimlanes comprise pools, used to represent BP participants, and lanes, used to categorize activities within a pool. Each pool represents a separate BP, so sequence flows should not be used to connect flow objects belonging to separate pools; message flows should be used instead.

Artifacts are used, for instance, to describe information provided or consumed by activities, or to complement the BPD with further descriptive information. Supported artifacts are: data objects, groups and annotations. The diagram below shows an example of a BPMN BPD:


OSS BPM editors supporting BPMN include: Eclipse STP BPMN Diagram Editor, http://www.eclipse.org/stp/bpmn/
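To make the grouping of BPMN elements concrete, the following sketch captures a tiny BPD as plain Python data structures; the classes and the example process are our own illustration, not BPMN 1.1 XML or any tool's API.

from dataclasses import dataclass, field
from typing import List


@dataclass
class FlowObject:            # events, activities, gateways
    kind: str                # e.g. "startEvent", "task", "exclusiveGateway", "endEvent"
    name: str


@dataclass
class SequenceFlow:          # a connecting object ordering the flow objects
    source: str
    target: str


@dataclass
class Lane:                  # swimlane inside a pool
    name: str
    flow_objects: List[FlowObject] = field(default_factory=list)


@dataclass
class Pool:                  # one business process participant
    participant: str
    lanes: List[Lane] = field(default_factory=list)
    flows: List[SequenceFlow] = field(default_factory=list)


buyer = Pool("Buyer",
             lanes=[Lane("Sales", [
                 FlowObject("startEvent", "Order received"),
                 FlowObject("task", "Check order"),
                 FlowObject("endEvent", "Order confirmed"),
             ])],
             flows=[SequenceFlow("Order received", "Check order"),
                    SequenceFlow("Check order", "Order confirmed")])
print(buyer.participant, len(buyer.flows))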

References:
1. BPMN 1.1, http://www.omg.org/spec/BPMN/1.1/PDF/
2. BPMN 2.0 RFP, http://www.omg.org/cgi-bin/doc?bmi/2007-6-5

Other links: 1. BPMN Information home page: http://www.bpmn.org/

4.2.10 Business Transaction Protocol (BTP). OASIS

BTP was the first attempt to define a coordination protocol for Web Service-based transactional applications. It was brought into the OASIS consortium in order to define a standard specification. BTP is designed to allow the coordination of application work between multiple participants owned or controlled by autonomous organizations. BTP is an interoperation protocol which defines the roles that software agents (actors) may occupy, the messages that pass between such actors, and the obligations upon and commitments made by actors-in-roles. It does not define the programming interfaces to be used by application programmers to stimulate message flow or associated state changes. BTP's ability to coordinate between services offered by autonomous organizations makes it ideally suited for use in a Web Services environment. For this reason the specification defines communication protocol bindings which target the emerging Web Services arena, while preserving the capacity to carry BTP messages over other communication protocols.


Protocol message structure and content constraints are schematized in XML, and message content is encoded in XML instances. BTP is based on a permissive and minimal approach, where constraints on implementation choices are avoided. The protocol also tries to avoid unnecessary dependencies on other standards, with the aim of lowering the hurdle to implementation. Each system that participates in a business transaction can be thought of as having two elements: an application element and a BTP element (see figure below). The application elements exchange messages to accomplish the business function. When Foo Bank's bill payment service sends a message to the check writing service with details of the payee's name, address, and payment amount, the application elements of the two services are exchanging a message. The BTP elements of the two services also exchange messages that help compose, control, and coordinate a reliable outcome for the messages sent between the application elements.

BTP uses a two-phase (2PC) outcome coordination protocol to ensure that the overall application achieves a consistent result. BTP permits the consistent outcome to be defined a priori, meaning that all the work is confirmed or none is (an atomic business transaction or atom), or to allow application intervention in the selection of the work to be confirmed (a cohesive business transaction or cohesion).

− Atomic Business Transactions, or atoms: these are like traditional transactions, with a relaxed isolation property.
− Cohesive Business Transactions, or cohesions: these are transactions where both the isolation and atomicity properties are relaxed.
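The sketch below illustrates the two-phase outcome for an atom (prepare, then confirm all or cancel all); it is a conceptual illustration under our own assumptions, not the BTP message set or its relaxed-isolation details.

class Participant:
    def __init__(self, name: str, can_prepare: bool = True):
        self.name, self.can_prepare = name, can_prepare

    def prepare(self) -> bool:   # phase 1: promise to be able to confirm
        return self.can_prepare

    def confirm(self) -> None:   # phase 2a: make the provisional effect final
        print(f"{self.name}: confirmed")

    def cancel(self) -> None:    # phase 2b: undo the provisional effect
        print(f"{self.name}: cancelled")


def complete_atom(participants) -> bool:
    prepared = [p for p in participants if p.prepare()]
    if len(prepared) == len(participants):
        for p in participants:
            p.confirm()
        return True
    for p in prepared:           # consistent outcome: nothing is confirmed
        p.cancel()
    return False


complete_atom([Participant("bill-payment"), Participant("check-writing")])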

The BTP allows great flexibility in the implementation of business transaction participants. Such participants enable the consistent reversal of the effects of atoms. BTP participants may use recorded before- or after-images, or compensation operations, to provide the "roll-forward, roll-back" capacity which enables their subordination to the overall outcome of an atomic business transaction.

References

1. OASIS Business Transactions Technical Committee, http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=business-transaction
2. OASIS BTP Specification, http://www.oasis-open.org/committees/download.php/1184/2002-06-03.BTP_cttee_spec_1.0.pdf

Other links

JOTM-BTP: An open-source implementation of the BTP protocol, http://jotm.objectweb.org/jotm-btp.html

Notes

BTP was superseded by WS-CAF.

4.2.11 CC/PP – UAProf. W3C

A CC/PP profile [1] is a description of device capabilities and user preferences. This is often referred to as a device's delivery context and can be used to guide the adaptation of content presented to that device. A CC/PP vocabulary is a set of identifiers (URIs) used to refer to specific capabilities and preferences. The specification covers:

− the types of values to which CC/PP attributes may refer,
− an appendix describing how to introduce new vocabularies,
− an appendix giving an example small client vocabulary covering print and display capabilities, and
− an appendix providing a survey of existing work from which new vocabularies may be derived.

References

1. Composite Capability/Preference Profiles (CC/PP): Structure and Vocabularies 1.0, http://www.w3.org/TR/2004/REC-CCPP-struct-vocab-20040115/

4.2.12 Common Diagnostic Model (CDM). DMTF

The CDM [1] is an architecture and methodology, defined by the Distributed Management Task Force, for exposing system diagnostic instrumentation through the CIM standard interfaces. Standardization of these interfaces means that clients, providers, and tests gain a certain degree of portability and, in many cases, need only be written once to satisfy multiple environments and platforms. OEMs can differentiate their diagnostic offerings by how effectively their applications use the information and capabilities available through CIM to maintain and service their systems. CDM is widely used within the industry to evaluate the health of computer systems in multi-vendor environments. CDM creates diagnostic instrumentation that can be utilized by platform management applications, and its tight synergy with the other manageability domains in CIM further enables the integration of diagnostics into critical management functions. CDM allows Diagnostic Client applications to:

− discover, configure and execute diagnostic tests
− view progress and control test execution
− view and manage test execution results

References

1. Specification, http://www.dmtf.org/interoperability/CDM_Forum/

4.2.13 Common Information Model (CIM). DMTF

CIM [1] is a Distributed Management Task Force specification that defines how managed resources in a SOI can be described (using an object-oriented model) as a common set of objects and relationships between them. Basically, CIM provides a common definition schema of management information for systems, networks, applications and services, and allows for vendor extensions. CIM's common definitions enable vendors to exchange semantically rich management information between systems throughout the network. CIM is being developed in a number of workgroups within DMTF. Particularly relevant here are: Database, Desktop and Mobile, Networks, Server Management, System Virtualization, Partitioning, and Clustering, Telecom, Utility Computing.

CIM is composed of a Specification [2] and a CIM Schema [3]. The Specification defines the details for integration with other management models, while the Schema provides the actual model descriptions. Management schemas are the building blocks for management platforms and management applications, such as device configuration, performance management, and change management. CIM structures the managed environment as a collection of interrelated systems, each composed of discrete elements. CIM supplies a set of classes with properties and associations that provide a well-understood conceptual framework to organize the information about the managed environment. A thorough knowledge of CIM is assumed for any programmer writing code that operates against the object schema or for any schema designer intending to put new information into the managed environment. CIM is structured into the following distinct layers:

− Core model: an information model that applies to all areas of management. It is composed of a small set of classes, associations, and properties for analyzing and describing managed systems. It can be used as a starting point for analyzing how to extend the common schema. While classes can be added to the core model over time, major re-interpretations of the core model classes are not anticipated.
− Common model: an information model common to particular management areas but independent of a particular technology or implementation. The common areas are systems, applications, networks, and devices. The information model is specific enough to provide a basis for developing management applications. This schema provides a set of base classes for extension into the area of technology-specific schemas. The core and common models together are referred to as the CIM Schema.
− Extension schemas: technology-specific extensions of the common model that are specific to environments, such as operating systems (for example, UNIX or Microsoft Windows). The common model is expected to evolve as objects are promoted and properties are defined in the extension schemas.
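The layering can be pictured with a small Python analogue; the classes below are illustrative stand-ins, loosely inspired by CIM class names, rather than the actual MOF definitions of the CIM Schema.

class ManagedElement:                 # core model: applies to all management areas
    def __init__(self, caption: str):
        self.caption = caption


class System(ManagedElement):         # common model: the "systems" management area
    def __init__(self, caption: str, hostname: str):
        super().__init__(caption)
        self.hostname = hostname


class LinuxComputerSystem(System):    # extension schema: technology-specific
    def __init__(self, caption: str, hostname: str, kernel: str):
        super().__init__(caption, hostname)
        self.kernel = kernel


node = LinuxComputerSystem("web front-end", "web01.example.org", "2.6.27")
print(node.caption, node.hostname, node.kernel)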

Because CIM is not bound to a particular implementation, it can be used to exchange management information in a variety of ways. Four of these ways are illustrated in the following figure:

− For a repository, the constructs defined in the model are stored in a database. These constructs are not instances of the object, relationship, and so on; rather, they are definitions to establish objects and relationships. The meta-model used by CIM is stored in a repository that becomes a representation of the meta-model. The constructs of the meta-model are mapped into the physical schema of the targeted repository, and the repository is then populated with the classes and properties expressed in the core model, common model, and extension schemas.
− For an application DBMS, the CIM is mapped into the physical schema of a targeted DBMS (for example, relational). The information stored in the database consists of actual instances of the constructs. Applications can exchange information when they have access to a common DBMS and the mapping is predictable.
− For application objects, the CIM is used to create a set of application objects in a particular language. Applications can exchange information when they can bind to the application objects.
− For exchange parameters, the CIM, expressed in some agreed syntax, is a neutral form to exchange management information through a standard set of object APIs. The exchange occurs through a direct set of API calls or through exchange-oriented APIs that can create the appropriate object in the local implementation technology.

There are several open-source implementations such as SBLIM[4] and OpenDRIM[5].

References

1. Home, http://www.dmtf.org/standards/cim/
2. Specification v2.4, http://www.dmtf.org/standards/published_documents/DSP0004_2.4.0.pdf
3. Schema v2.18, http://www.dmtf.org/standards/cim/cim_schema_v218
4. SBLIM, http://sourceforge.net/projects/sblim
5. OpenDRIM, http://www.opendrim.org/

4.2.14 Common Management Information Protocol (CMIP)/Common Management Information Service (CMIS). ITU

CMIP[1] is a protocol for network management defined by the International Telecommunications Union (ITU) that provides an implementation of the services defined by CMIS, allowing communication between network management applications and management agents. CMIS[2] is a service defined by the ITU that may be employed by network elements for network management. It defines the service interface that is implemented by the Common Management Information Protocol (CMIP). CMIP/CMIS is part of the Open Systems Interconnection (OSI) body of network standards. The following figure presents a high-level view of a network management system based on CMIP/CMIS:


CMIP is a well designed protocol that defines how network management information is exchanged between network management applications and management agents. It uses an ISO reliable connection-oriented transport mechanism and has built-in security that supports access control, authorization and security logs. The management information is exchanged between the network management application and management agents through managed objects. A managed object is a characteristic of a managed device that can be monitored, modified or controlled, and that can be used to perform tasks. The network management application can initiate transactions with management agents using the following operations:

Management operation services
o M-ACTION - Request an action to occur as defined by the managed object
o M-CANCEL-GET - Cancel an outstanding GET request
o M-CREATE - Create an instance of a managed object
o M-DELETE - Delete an instance of a managed object
o M-GET - Request the value of a managed object instance
o M-SET - Set the value of a managed object instance

Management notification services
o M-EVENT-REPORT - Send events occurring on managed objects

Management association services
To transfer management information between open systems using CMIS/CMIP, peer connections, i.e., associations, must be established. This requires the establishment of an Application layer association, a Session layer connection, a Transport layer connection and, depending on the supporting communications technology, Network layer and Link layer connections. CMIS initially defined management association services, but it was later decided that these services could be provided by ACSE, and they were removed.
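To make these service primitives concrete, the sketch below summarizes them as Java interfaces. CMIP is an OSI protocol with no standard Java binding, so every name here is illustrative only.

// Illustration only: the CMIS primitives a manager can invoke on an agent, and the
// notification flowing back, modelled as Java interfaces. These are not part of any
// standard API; CMIP itself is an OSI protocol.
interface CmisManagementService {
    Object mGet(String managedObjectInstance, String attribute);             // M-GET
    void mSet(String managedObjectInstance, String attribute, Object value); // M-SET
    void mAction(String managedObjectInstance, String action, Object arg);   // M-ACTION
    String mCreate(String managedObjectClass, Object initialAttributes);     // M-CREATE
    void mDelete(String managedObjectInstance);                              // M-DELETE
    void mCancelGet(long outstandingGetId);                                  // M-CANCEL-GET
}

// Direction is reversed for notifications: the agent reports events to the manager.
interface CmisNotificationListener {
    void mEventReport(String managedObjectInstance, String eventType, Object eventInfo); // M-EVENT-REPORT
}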

A management agent can initiate a transaction with the network management application using the M-EVENT-REPORT operation. This operation can be used to send notifications or alarms to the network management application based upon predetermined conditions set by the network management application using the M-ACTION operation. CMIP does not specify the functionality of the network management application; it only defines the information exchange mechanism for managed objects, not how the information is to be used or interpreted. CMIP was designed in competition with SNMP, and has far more features than SNMP. CMIP was to be a key part of the Telecommunications Management Network vision, and was to enable cross-organizational as well as cross-vendor network management. CMIP is supported mainly by telecommunication devices. On the Internet, however, most TCP/IP devices support SNMP and not CMIP, because of the complexity and resource requirements of CMIP agents and management systems. The major advantages of CMIP over SNMP are:

CMIP variables not only relay information, but can also be used to perform tasks; this is impossible under SNMP.
CMIP is a safer system, as it has built-in security that supports authorization, access control, and security logs.
CMIP provides powerful capabilities that allow management applications to accomplish more with a single request.
CMIP provides better reporting of unusual network conditions.

References

1. CMIP Specification, http://www.itu.int/rec/T-REC-X/recommendation.asp?lang=en&parent=T-REC-X.711
2. CMIS Specification, http://www.itu.int/rec/T-REC-X/recommendation.asp?lang=en&parent=T-REC-X.710

4.2.15 Common RAID Disk Data Format (DDF). SNIA

Especially when administering large systems, system administrators may wish to be able to change easily the internal RAID solution they are using. The best method would be to move the disks, with data in place, from one RAID implementation to another. Unfortunately, the different methods for storing configuration information prohibit data-in-place migration between systems from different storage vendors.

DDF[1][2] is a specification from the Storage Networking Industry Association (SNIA). It provides requirements for the RAID configuration Disk Data Format (DDF) structure stored on physical disks attached to RAID controllers. The DDF structure allows different vendor implementations to store RAID configuration information on physical disks in a common format, so that the user data on those disks remains accessible independently of the RAID controller being used. Controllers are not required to store this information in the same format in their internal memory. In this way, the DDF structure allows a basic level of interoperability between different suppliers of RAID technology. The Common RAID DDF structure benefits storage users by enabling data-in-place migration among systems from different vendors. The scope of the DDF is limited to the interface between a block aggregation implementation and storage devices. The DDF is stored as data structures on the storage devices (see Figure below).

Moreover, the DDF data structure also allows RAID groups with vendor unique RAID formats. While vendor unique RAID formats prohibit data-in-place migration between vendors, the Common RAID DDF will be a benefit in these situations. At a minimum, when a RAID group containing a unique format is moved to a different RAID solution that does not support the format, the new system will still be able to read the DDF structure. It can identify the RAID groups containing the unique format and notify the system administrator that these RAID groups contain valid data that is not accessible by the current storage system. Potential data loss is prevented because the new RAID system will not overwrite the data without administrator confirmation.

References

1. Home, http://www.snia.org/tech_activities/standards/curr_standards/ddf/
2. Specification v1.2, http://www.snia.org/tech_activities/standards/curr_standards/ddf/SNIA-DDFv1.2_with_Errata_A_Applied.pdf

4.2.16 Configuration Description, Deployment and Lifecycle Management Component Model Specification (1.0). CDDLM-WG

The Component Model[6] specifies the requirements on CDDLM-deployable software components or deployment objects. It defines the interfaces the objects must provide in order to be managed by the CDDLM deployment system, and it defines the capabilities offered to running objects by the deployment runtime. It should be noted that CDDLM deployment objects are typically management wrappers for the actual functional components – for example, a management wrapper for a web server would know how to configure, start and stop the web server.

The CDDLM Component Model outlines the requirements for creating a deployment object responsible for the lifecycle of a deployed resource. Each deployment object is defined using the CDL language and mapped to its implementation. The deployment object provides a WS-Resource Framework (WSRF) compliant "Component Endpoint" for lifecycle operations on the managed resource. The model also defines the rules for managing the interaction of objects with the CDDLM Deployment API in order to provide an aggregate, controllable lifecycle and the operations which enable this process.
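As a rough sketch of the idea, the Java fragment below shows what a management wrapper for a web server could look like. The interface paraphrases the kind of lifecycle operations the Component Model requires; it is not the normative WSRF Component Endpoint, and the operation and state names are illustrative only.

// Sketch only: a management wrapper for a web server written against a paraphrased
// deployment-object lifecycle. Not the normative CDDLM "Component Endpoint".
interface DeploymentObject {
    void initialize(Object configuration) throws Exception; // prepare the managed resource
    void run() throws Exception;                             // start it
    void terminate() throws Exception;                       // stop it and clean up
    String getLifecycleState();                              // e.g. "initialized", "running"
}

class WebServerDeploymentObject implements DeploymentObject {
    private Process serverProcess;
    private String state = "instantiated";

    public void initialize(Object configuration) {
        // write the web server's configuration files from the supplied description
        state = "initialized";
    }
    public void run() throws Exception {
        serverProcess = new ProcessBuilder("httpd", "-D", "FOREGROUND").start();
        state = "running";
    }
    public void terminate() {
        if (serverProcess != null) serverProcess.destroy();
        state = "terminated";
    }
    public String getLifecycleState() { return state; }
}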

References

1. Specification, http://www.w3.org/Submission/WS-Policy/
2. CDDLM Working Group, http://www.ogf.org/gf/group_info/view.php?group=cddlm-wg
3. Configuration Description, Deployment and Lifecycle Management SmartFrog-based Specification, http://www.ogf.org/documents/GFD.51.pdf
4. Configuration Description, Deployment and Lifecycle Management XML-based Specification, http://www.ogf.org/documents/GFD.85.pdf
5. Configuration Description, Deployment API Specification, http://www.ogf.org/documents/GFD.69.pdf
6. Configuration Description, Component Model Specification, http://www.ogf.org/documents/GFD.65.pdf


4.2.17 Configuration Description, Deployment and Lifecycle Management Deployment API (1.0). CDDLM-WG

The CDDLM framework needs a deployment API[5] through which callers can deploy services described in the component model languages. The deployment API is based on the WS-Resource Framework (WS-RF) SOAP API. Application descriptions made with the previous specifications are passed to the Grid resources via the Deployment API. The Deployment API takes in descriptions and realizes the required systems by interacting with the Grid resources to install, configure, start and manage the required software components. It also allows management of deployed systems, including termination. The API must support remote access for deploying systems, terminating existing deployed systems, and probing their state.

The deployment API allows deploying applications to one or more target computers. Every set of computers to which systems can be deployed hosts one or more "Portal Endpoints", WS-RF resources which provide a means to create new "System Endpoints". A System Endpoint represents a deployed system. The caller can upload files to it, and then can submit a deployment descriptor for deployment. A System Endpoint is effectively a component in terms of the Component Model specification – it implements the properties and operations defined in that document. It also adds the ability to resolve references within the deployed system, enabling remote callers to examine the state of components within it.
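The sketch below compresses this interaction into hypothetical local Java calls. PortalEndpoint and SystemEndpoint stand in for the WS-RF resources defined by the Deployment API; they are not classes from any published binding, and the method names are only indicative.

// Hypothetical sketch of the deployment interaction described above, compressed into
// local Java calls. These interfaces are illustrative stand-ins, not a real client library.
interface PortalEndpoint {
    SystemEndpoint createSystem();                 // create a new "System Endpoint"
}

interface SystemEndpoint {
    void addFile(String name, byte[] content);     // upload files needed by the system
    void initialize(String deploymentDescriptor);  // submit a CDL deployment descriptor
    void run();                                    // start the deployed system
    void terminate(String reason);                 // tear it down again
    String lookup(String reference);               // resolve a reference inside the system
}

class DeploymentClient {
    static void deploy(PortalEndpoint portal, String cdlDescriptor) {
        SystemEndpoint system = portal.createSystem();
        system.initialize(cdlDescriptor);
        system.run();
        // ... later, when the Grid resources must be relinquished:
        // system.terminate("job finished");
    }
}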

References

1. Specification, http://www.w3.org/Submission/WS-Policy/
2. CDDLM Working Group, http://www.ogf.org/gf/group_info/view.php?group=cddlm-wg
3. Configuration Description, Deployment and Lifecycle Management SmartFrog-based Specification, http://www.ogf.org/documents/GFD.51.pdf
4. Configuration Description, Deployment and Lifecycle Management XML-based Specification, http://www.ogf.org/documents/GFD.85.pdf
5. Configuration Description, Deployment API Specification, http://www.ogf.org/documents/GFD.69.pdf
6. Configuration Description, Component Model Specification, http://www.ogf.org/documents/GFD.65.pdf

4.2.18 Configuration Description, Deployment and Lifecycle Management SmartFrog-Based Language Specification (1.0) and CDDLM Configuration Description Language Specification (1.0). CDDLM-WG

Successful realization of the Grid vision of a broadly applicable and adopted framework for distributed system integration, virtualization, and management requires support for configuring Grid services, their deployment, and managing their lifecycle. Imagine that you have acquired the use of several Grid servers through some Grid resource allocation mechanism and you now wish to deploy a multi-tier web application on these machines. Your complete application could consist of a database tier running on one server, an application server tier running on two servers configured for failover, and a web server tier running on a variable number of servers depending on workload. Assume also that, since you will need to relinquish the Grid resources periodically, you want to automate the process of deploying the complete application and removing it cleanly when done. In order to do so, a language in which to describe the required components and systems is needed. CDDLM-WG defined a Configuration Description Language that is used to describe the desired application. It allows you to specify which software application components (e.g., web servers) are to be deployed, onto which resources and in which order. It allows you to specify the individual configuration parameters for each software component and to link configuration parameters across the set of components that comprise the application. Description files can be reused and employed as templates for other deployments. There are two description language variants:

The first one[3] is based on the language from the SmartFrog system [1].
The other, called CDL[4], was developed specifically for CDDLM and is based on XML.

The first specification addresses a set of description challenges, including how to represent the full range of service and resource elements, how to support service "templates", service composition, correctness checking, and so on, and provides a definition of the CDDLM language that is based on SmartFrog (Smart Framework for Object Groups) and its requirements. The CDDLM Configuration Description Language (CDL) is the second variant of the description language defined by CDDLM-WG. It describes an XML-based language for the declarative description of system configurations consisting of components (deployment objects) defined in the CDDLM Component Model. The Deployment API uses a deployment descriptor in CDL in order to manage the deployment lifecycle of systems. The language provides ways to describe properties (names, values, and types) of components, including value references, so that data can be assigned dynamically while preserving specified data dependencies. A system is described as a hierarchical structure of components. The language also provides prototype-based template functionality (i.e., prototype references) so that the user can describe a system by referring to component descriptions given by component providers.

References

1. Specification, http://www.w3.org/Submission/WS-Policy/
2. CDDLM Working Group, http://www.ogf.org/gf/group_info/view.php?group=cddlm-wg
3. Configuration Description, Deployment and Lifecycle Management SmartFrog-based Specification, http://www.ogf.org/documents/GFD.51.pdf
4. Configuration Description, Deployment and Lifecycle Management XML-based Specification, http://www.ogf.org/documents/GFD.85.pdf
5. Configuration Description, Deployment API Specification, http://www.ogf.org/documents/GFD.69.pdf
6. Configuration Description, Component Model Specification, http://www.ogf.org/documents/GFD.65.pdf

4.2.19 Content Selection for Device Independence (DISelect). W3C

DISelect [1] specifies a syntax and processing model for general purpose selection. Selection involves conditional processing of various parts of an XML information set according to the results of the evaluation of expressions. Using this mechanism, some parts of the information set can be selected for further processing and others can be suppressed. The parts of the infoset affected, and the expressions that govern their processing, are specified by means of XML-friendly syntax: elements, attributes and XPath expressions. The specification defines how these components work together to provide general purpose selection. DISelect is part of the approach being developed for the provision of a markup language that supports the creation of web sites that can be used from a wide variety of devices with a wide variety of characteristics. The following example illustrates suppression of the display of an image if the usable width of the display on a device is too small. The first and third paragraphs containing the text are always presented. However, the second paragraph containing the image is shown only if the dc:cssmq-width function indicates that the usable width of the device display is more than 200 pixels.

(The markup of the example is not reproduced here; the visible text of its first and third paragraphs reads "The flooding was quite extensive." and "Many people were evacuated from their homes.", with the conditionally displayed image in between.)

References

1. DISelect, http://www.w3.org/TR/2004/WD-cselection-20040611

4.2.20 Delivery Context: Client Interfaces (DCCI). W3C

DCCI [1] defines platform and language neutral programming interfaces that provide Web applications access to a hierarchy of dynamic properties representing device capabilities, configurations, user preferences and environmental conditions. The term delivery context describes the set of characteristics of the device, the network, user preferences and any other aspects that apply to the interaction between a device and the Web. Some aspects of delivery context are static. For example, the color resolution of the display on a mobile device is fixed. Other aspects might be dynamic. Some devices have flip out covers that effectively provide a larger display when opened. Some devices can detect the orientation in which they are being used, and can rotate their output thereby changing the aspect ratio of the display. An increasing number of mobile devices can locate their position using the Global Positioning System [GPS]. Depending on the application in use, some or all of this information may be useful in tailoring the user experience.


Generally, properties are considered highly dynamic if their values can change during a session with a web server. Location is a good example of a property that can be highly dynamic with values being transmitted through mechanisms such as GPS notifications. Other properties may have values that can change, but more slowly. In particular, there are properties that are usually constant during a session. User preferences often fall into this category. This second kind of less dynamic property is often associated with some kind of fixed, default value. For example, a device may be manufactured with a particular font as default. A user can subsequently change this default font. Although the value of the characteristic has been changed and may affect the rendering of web pages, the value is unlikely to be highly dynamic. DCCI can provide access to any part of the delivery context available at the device. This includes static properties, those that change infrequently and those that are highly dynamic, such as battery level or location.
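A sketch of what programmatic access to such a property hierarchy could look like from Java is given below. DCCI itself defines DOM-based, language-neutral interfaces, so the type and method names here are illustrative only and are not the normative DCCI interfaces.

// Hypothetical sketch of reading a DCCI-style delivery-context property hierarchy.
import java.beans.PropertyChangeListener;

interface DeliveryContextNode {
    String getPropertyName();                       // e.g. "displayWidth", "location"
    Object getValue();                              // current value of the property
    Iterable<DeliveryContextNode> getChildren();    // hierarchy of properties
    void addListener(PropertyChangeListener l);     // be notified when a dynamic value changes
}

class AdaptingApplication {
    static void adapt(DeliveryContextNode displayWidth) {
        // Static or slowly changing properties can simply be read once...
        int width = (Integer) displayWidth.getValue();
        System.out.println("usable width: " + width);
        // ...while highly dynamic ones (location, battery level) are better observed.
        displayWidth.addListener(evt ->
                System.out.println("display width changed to " + evt.getNewValue()));
    }
}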

References

1. Delivery Context: Client Interfaces (DCCI), http://www.w3.org/TR/2007/CR-DPF-20071221/

4.2.21 Delivery Context Ontology. W3C

The Delivery Context Ontology [1] provides a formal model of the characteristics of the environment in which devices interact with the Web. The delivery context includes, among others, the characteristics of the device, the software used to access the Web and the network providing the connection. The delivery context is an important source of information that can be used to adapt materials from the Web to make them usable on a wide range of different devices with different capabilities. The ontology is formally specified in the Web Ontology Language [OWL]. The specification describes the ontology and gives details of each property that it contains.

References

1. Delivery Context Ontology, http://www.w3.org/2007/uwa/editors-drafts/DeliveryContextOntology/2007-10-31/DCOntology.html

4.2.22 Desktop and mobile Architecture for System Hardware (DASH). DMTF

Desktop and mobile systems management in today's enterprise environments comprises a disparate set of tools and applications which administrators can use to manage the multitude of networked desktop and mobile computers. In many cases, these tools are specialized and adapted to each individual environment, installation and product in the environment.


Currently, the CIM Schema provides a feature-rich systems management environment. In its current form, it also places a burden on those vendors attempting to implement the CIM Schema and CIM-XML Protocol to support systems hardware management. This has resulted in a lack of interoperability and acceptance of solutions in the desktop and mobile systems hardware management solution space, particularly in the out-of-band and out-of-service cases. In addition, the resulting out-of-band and out-of-service management solutions are different from the operating system's representation and management of the system.

The Desktop and mobile Architecture for System Hardware (DASH)[1][2] Management Initiative from the Distributed Management Task Force (DMTF) supports a suite of specifications which include architectural semantics, industry standard protocols and a set of profiles to standardize the management of desktop and mobile systems independent of machine state, operating platform or vendor. By creating industry standard protocols, interoperability over the network is facilitated, and products which adhere to those standards interoperate at the level of both the syntax and the semantics of those protocols. Because it is based on the CIM Schema, the DASH Management Initiative (hereafter referred to as DASH) leverages the richness of CIM. By creating industry standard profiles, the richness of the CIM Schema can be applied in a consistent manner by all vendors.

Extra emphasis has been placed in the development of DASH on enabling lightweight implementations which are architecturally consistent. This has been done to enable a full spectrum of implementations without sacrificing the richness of the CIM heritage, including software-only solutions and small footprint firmware solutions. Emphasis has been placed on ensuring that these implementations will be interoperable, independent of implementation, CPU architecture, chipset solutions, vendor or operating environment. The overall DASH Architecture Model is shown in the next Figure.


The terms used in this model are defined in the DASH specification. The dotted lines in this model indicate the protocols and transports that are externally visible. These are the communication interfaces between the Manageability Access Point (MAP) and the Client and represent data that flows across the network, for example. The solid lines indicate semantically visible interfaces: the packets, transports, and interfaces are not externally visible, but the fact that they are separate components with their own semantics is visible. The functional implications which are noticeable by the Client need to be accounted for in order to have a complete model. An open-source implementation is available[3].

References

1. Home, http://www.dmtf.org/standards/mgmt/dash/
2. Specification v1.1, http://www.dmtf.org/standards/published_documents/DSP0232_1.1.0.pdf
3. DASH Implementation, http://sourceforge.net/projects/dash-management


4.2.23 Device Description Repository (DDR) Simple API. W3C

The need for Device Descriptions (information about the Properties of various Aspects of the Delivery Context) is not confined to the mobile Delivery Context. It is common practice for Web sites to detect the type of user agent ("browser sniffing") to determine possibly small but important differences between various desktop Web browsers and adapt content to accommodate those differences. In the desktop Delivery Context, the number of different Properties that affect the usability of the resulting content is limited when compared with the number of different Properties of the mobile Delivery Context that affect usability of content. Examples of such Properties include screen dimensions, input methods, memory and CPU constraints. There are also differences between mobile Web browsers, including the markup languages and image formats supported. Historically, it has been difficult or impossible to upgrade Web browser software on mobile devices or to add support for features not present in the Web browser originally shipped with the device. This leads to a very wide variation not only of hardware related features but also of features determined by software and firmware.

Although the need for content adaptation is a general requirement of the Web as a whole, the need for a more systematic approach is seen as being most urgent in the mobile Delivery Context. As a result, Device Description Repositories (DDRs) have become essential components of the development of content targeted at the mobile Delivery Context. A number of proprietary implementations exist, each with its own API and method of describing the Properties of the Delivery Context. The Device Descriptions Working Group (DDWG), a Working Group of the W3C Mobile Web Initiative, was chartered to create a single API [1] through which Property values can be retrieved, and a "Core" Vocabulary [Core Vocabulary] that identifies a small number of such Properties.
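To illustrate the kind of lookup the DDR Simple API standardizes, the sketch below shows a content-adaptation component asking a repository for a device's usable display width, based on the request headers as evidence. The interface paraphrases the API; it is not the normative Java binding, and the vocabulary identifier is a placeholder.

// Hypothetical sketch of a DDR lookup used for content adaptation.
import java.util.Map;

interface DeviceDescriptionRepository {
    // Evidence is typically the HTTP request headers (User-Agent and friends);
    // the property name comes from the agreed vocabulary, e.g. a display-width property.
    String getPropertyValue(Map<String, String> evidence, String vocabularyId, String propertyName);
}

class ImageSelector {
    static String chooseImage(DeviceDescriptionRepository ddr, Map<String, String> headers) {
        // "CORE_VOCABULARY_ID" is a placeholder for the identifier of the DDWG Core Vocabulary.
        String width = ddr.getPropertyValue(headers, "CORE_VOCABULARY_ID", "displayWidth");
        return Integer.parseInt(width) > 200 ? "/img/flood-large.jpg" : "/img/flood-small.jpg";
    }
}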

References

1. Device Description Repository Simple API, http://www.w3.org/TR/2008/WD-DDR-Simple-API-20080404

4.2.24 Device Independent Authoring Language (DIAL). W3C

The purpose of the Device Independent Authoring Language (DIAL) [1] is to provide a markup language for the filtering and presentation of Web page content available across different delivery contexts. This facilitates an optimal user experience following adaptation of the DIAL instance document. DIAL is a language profile based on existing W3C XML vocabularies and CSS modules (mainly modules from XHTML2). These provide standard mechanisms for representing Web page structure, presentation and form interaction. DIAL also makes use of the DISelect metadata vocabulary [2] for overcoming the challenges inherent in authoring for multiple delivery contexts (see DISelect above).

References

1. Device Independent Authoring Language (DIAL), http://www.w3.org/TR/2007/WD-dial-20070727
2. DISelect, http://www.w3.org/TR/2005/WD-cselection-20050502

4.2.25 Digital Signature Service (DSS) Core Protocols. OASIS

This OASIS specification[1] describes two XML-based request/response protocols – a signing protocol and a verifying protocol. Through these protocols a client can send documents (or document hashes) to a server and receive back a signature on the documents; or send documents (or document hashes) and a signature to a server, and receive back an answer on whether the signature verifies the documents. These operations could be useful in a variety of contexts – for example, they could allow clients to access a single corporate key for signing press releases, with centralized access control, auditing, and archiving of signature requests. They could also allow clients to create and verify signatures without needing complex client software and configuration.

References

1. Home, http://www.oasis-open.org/committees/dss

4.2.26 Distributed Resource Management Application API Specification (1.0). OGF/GGF

The Distributed Resource Management Application API (DRMAA) Working Group [1] also belongs to the Open Grid Forum (OGF), previously known as the Global Grid Forum (GGF). This specification [2] describes the Distributed Resource Management Application API (DRMAA), which provides a generalized API to distributed resource management systems (DRMSs) in order to facilitate the integration of application programs. The specification encompasses the high-level functionality necessary for an application to consign a job to a DRMS, including common operations on jobs like termination or suspension. The scope of DRMAA is limited to job submission, job monitoring and control, and retrieval of the finished job status. DRMAA provides application developers and distributed resource management builders with a programming model that enables the development of distributed applications tightly coupled to an underlying DRMS. For deployers of such distributed applications, DRMAA preserves flexibility and choice in system design. The objective is to facilitate the direct interfacing of applications to DRMSs for application builders, portal builders, and independent software vendors.

There are several implementations: Sun Grid Engine DRMAA[3], Condor[4], Gridway[5], XGRID[6], PBS[7], Platform Computing Corp[8], UNICORE[9]...
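As an illustration of the programming model, the snippet below submits a simple job through the DRMAA Java binding as shipped, for example, with Sun Grid Engine. The package and class names follow that binding (org.ggf.drmaa), but exact method signatures may differ slightly between implementations and binding versions.

// Sketch of job submission and monitoring through the DRMAA Java binding.
import org.ggf.drmaa.*;
import java.util.Collections;

public class SubmitJob {
    public static void main(String[] args) throws DrmaaException {
        Session session = SessionFactory.getFactory().getSession();
        session.init("");                                   // contact the default DRMS

        JobTemplate jt = session.createJobTemplate();
        jt.setRemoteCommand("/bin/sleep");                  // the executable to run
        jt.setArgs(Collections.singletonList("10"));        // its arguments

        String jobId = session.runJob(jt);                  // consign the job to the DRMS
        System.out.println("submitted job " + jobId);

        JobInfo info = session.wait(jobId, Session.TIMEOUT_WAIT_FOREVER);
        System.out.println("job finished with status " + info.getExitStatus());

        session.deleteJobTemplate(jt);
        session.exit();
    }
}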

References

1. Wiki, http://www.drmaa.net/w/
2. Specification, http://www.ogf.org/documents/GFD.22.pdf
3. Sun Grid Engine DRMAA, http://www.gridforum.org/Public_Comment_Docs/Documents/Mar-2007/sge_DRMAA_experience_report.pdf
4. Condor, http://www.ggf.net/Public_Comment_Docs/Documents/Mar-2007/condor_DRMAA_experience_report.pdf
5. Gridway, http://en.wikipedia.org/wiki/GridWay
6. XGRID, http://www.apple.com/macosx/features/xgrid/
7. PBS, http://en.wikipedia.org/wiki/Portable_Batch_System
8. Platform Computing Corp, http://www.platform.com/Newsroom/Press.Releases/2007/DRMAA.Standards.htm
9. UNICORE, http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?tp=&arnumber=1654618&isnumber=34688

4.2.27 ECperf Benchmark Specification. JCP

ECperf[1][2] was an early benchmark defined by the Java Community Process to measure the performance and scalability of J(2)EE application servers. It simulates a supply chain management (SCM) process which includes four application domains: corporate, order entry, supply chain management and manufacturing. The benchmark measures the throughput of the SCM application while varying the load injected into the system under test (SUT). Note: ECperf is now obsolete; it was superseded by SPECjAppServer.

References

1. Home, http://jcp.org/aboutJava/communityprocess/final/jsr004/index.html
2. Implementation, http://java.sun.com/developer/earlyAccess/j2ee/ecperf/download.html

4.2.28 Electronic Business Extensible Markup Language (ebXML). OASIS

The ebXML [1] infrastructure is composed of the following major elements:

Messaging Services (ebMS)[2]
Business Registry Services (ebXML RR)[3]
Trading Partner Information (TPP)[4]
Business Process Specification Schema (BPSS)[5]
ebXML Core Components (ebXML CC)[5]

Together these elements represent the so-called ebXML framework, which is brought together through the ebXML architecture specification and is in essence a model/architecture for B2B interoperability. ebXML is strongly promoted by OASIS. Whilst the elements of the ebXML framework work best interlinked, each can also be used in isolation with other technologies – for example, Core Components can form non-XML documents and Messaging Services can be used to deliver any SOAP transportable payload. ebXML is composed of four infrastructure components and several other efforts, such as ones focused on document creation, business process definition, etc. The infrastructure components are orthogonal in design; they may be used together or separately in implementing an infrastructure.

Messaging Service (ebMS)
This provides a standard way to exchange business messages between organisations. It provides a means to exchange a payload (which may or may not be an XML business document) reliably and securely. It also provides means to route a payload to the appropriate internal application once an organisation has received it. The messaging service specification does not dictate any particular file transport mechanism (such as SMTP, HTTP, or FTP) or network for actually exchanging the data, but is instead protocol neutral. ebMS is an extension of SOAP (with attachments), mainly in the areas of reliability and security.

Registry and Repository Services (ebXML RR)
The registry is a database of items that support 'doing business electronically'. Technically speaking, a registry stores information about items that actually reside in a repository; the two together can be thought of as a database. Items in the repository are created, updated, or deleted through requests made to the registry. The particular implementation of the registry/repository database is not specified, but only how other applications interact with the registry (registry services interfaces) and the minimum information model (the types of information that is stored about registry items) that the registry must support. Examples of items in the registry might be XML schemas of business documents, definitions of library components for business process modelling, and trading partner agreements. The ebXML RR specification is in two parts – one for the registry model itself and one for the service interface.

Trading Partner Information
The Collaboration Protocol Profile (or CPP) provides the definition, via an XML document, that specifies the details of how an organisation is able to conduct business electronically. It specifies such aspects as how to locate contact and other information about the organisation, the types of network and file transport protocols it uses, network addresses, security implementations, and how it does business (a reference to a Business Process Specification). The Collaboration Protocol Agreement (or CPA) specifies the details of how two organisations have agreed to conduct business electronically; it is formed by combining the CPPs of the two organisations. A CPA can be used by a software application to configure the technical details of conducting business electronically with another organisation. The CPA/CPP specification discusses the general tasks and issues in creating a CPA from two CPPs; however, for various reasons it does not specify an actual algorithm for doing so.

Business Process Specification Schema (BPSS)
The Specification Schema provides the definition (in the form of an XML DTD) of an XML document that describes how an organisation conducts its business. While the CPA/CPP deals with the technical aspects of how to conduct business electronically, the Specification Schema deals with the actual business process. It identifies such things as the overall business process, the roles, transactions, identification of the business documents used (the DTDs or schemas), document flow, legal aspects, security aspects, business level acknowledgements, and status. A Specification Schema can be used by a software application to configure the business details of conducting business electronically with another organisation.

ebXML Core Components (CC)
Core Components of ebXML are used to express semantics. They are a set of pre-defined elements that can be combined in order to define business documents such as orders or invoices. Those Core Components allow users to create standard compliant documents that consist of small fragments with a precise meaning. In the UN/CEFACT Forum, a large group of messaging experts from various industries continuously expands the library of Core Components and related Business Information Entities.

References

1. ebXML Website (OASIS), http://www.ebxml.org
2. ebXML Message Service Specification, http://www.oasis-open.org/committees/ebxml-msg/documents/ebMS_v2_0.pdf
3. ebXML RR, http://www.oasis-open.org/committees/regrep/documents/2.0/specs/ebrs.pdf
4. ebXML Technical Architecture Specification (OASIS), http://www.ebxml.org/specdrafts/ebXML_TA_v0.9.pdf
5. ebXML Business Process Specification Schema, http://www.ebxml.org/specs/ebBPSS.pdf

4.2.29 EJB Security

J2EE's EJB technology defines a set of processes, starting from application development and following through to the administration of secure distributed-transaction business programs. EJB security[1] is intended to be managed by the EJB container and driven by declarative security policy. Externalizing security policy from the EJB code provides greater opportunity for deployment flexibility, code reuse, and portability between container implementations. EJB delegates security issues to those EJB roles having greater familiarity with the security features of the EJB container and deployment environment. In practice, the effective security policy is defined by the Deployer and the System Administrator, and the EJB container is responsible for enforcement of the policy. Authentication is the responsibility of the EJB container and can be implemented using JAAS. Much of EJB security is concerned with authorization. EJB authorization is based on a simplified CORBA security model.
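As a minimal sketch of this declarative approach (assuming EJB 3 with the standard Java EE security annotations; the role names are invented for the example), a bean might look as follows. The same policy could equally be expressed in the ejb-jar.xml deployment descriptor, with enforcement left entirely to the container.

// Minimal sketch of container-managed (declarative) EJB security using the standard
// EJB 3 / Java EE security annotations; role names are examples only.
import javax.annotation.security.DeclareRoles;
import javax.annotation.security.PermitAll;
import javax.annotation.security.RolesAllowed;
import javax.ejb.Stateless;

@Stateless
@DeclareRoles({"teller", "auditor"})
public class AccountServiceBean {

    @RolesAllowed("teller")          // only callers in the "teller" role may transfer money
    public void transfer(String from, String to, double amount) {
        // business logic; the container has already authenticated and authorized the caller
    }

    @PermitAll                       // any authenticated caller may read a balance
    public double getBalance(String account) {
        return 0.0;
    }
}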

References

1. Reference, http://www.ubookcase.com/book/Addison.Wesley/Enterprise.Java.2.Security.Building.Secure.and.Robust.J2EE.Applications/

4.2.30 Enterprise Class Syslog Management

Syslog is a valuable mechanism to proactively capture chronic issues affecting the network. It can identify many more exceptions and network degradation warnings than other forms of telemetry, such as SNMP traps, and therefore must be utilized by support organizations. The wiki[1] presents Syslog in more depth.

References

1. Wiki, http://nms.gdd.net/index.php/Enterprise_Class_Syslog_Management

4.2.31 eXtensible Access Control Markup Language (XACML). OASIS

XACML[1] is an OASIS specification that aims to address fine grained control of authorized activities, the effect of characteristics of the access requestor, the protocol over which the request is made, authorization based on classes of activities, and content introspection (i.e., authorization based on both the requestor and potentially attribute values within the target, where the values of the attributes may not be known to the policy writer). XACML is also expected to suggest a policy authorization model to guide implementers of the authorization mechanism. The language can represent the functionalities of most security representation mechanisms and has standard extension points for defining new functions, data types, policy combination logic and so on.

References

1. Home, http://www.oasis-open.org/committees/xacml

4.2.32 eXtensible Access Method (XAM). SNIA

The XAM specification[1] from the Storage Networking Industry Association (SNIA) aims to define a standard access method (API) between "Consumers" (application and management software) and "Providers" (storage systems) to manage fixed content reference information storage services. XAM includes metadata definitions to accompany data to achieve application interoperability, storage transparency, and automation for ILM-based practices, long term records retention, and information security. XAM will be expanded over time to include other data types as well as support additional implementations based on the XAM API to XAM conformant storage systems.


XAM aims to provide:

Interoperability: Applications can work with any XAM conformant storage system; information can be migrated and shared
Compliance: Integrated record retention and disposition metadata
ILM Practices: Framework for classification, policy, and implementation
Migration: Ability to automate migration process to maintain long-term readability
Discovery: Application-independent structured discovery avoids application obsolescence

The specification provides an architecture (see the Figure below) that allows XAM-enabled applications to interface with XAM-compliant vendor devices. The goal of this architecture is to allow applications to take advantage of the XAM Application Programming Interface (API) to store and retrieve reference information in a vendor-independent and location-independent manner. A primary requirement of the XAM architecture is the ability to support access to multiple vendors' XAM Storage Systems and multiple versions of the same vendor's XAM Storage System. That is, different versions of the XAM specification must be able to access the same XAM Storage System, or the same version of the XAM specification must be able to access different versions of a XAM Storage System. This architecture also allows multiple applications to access the same XAM Storage System.

The application binds to one of the XAM API language bindings supplied by the XAM Library. XAM standardizes two bindings: a C language binding[2] and a Java language binding[3]. The XAM architecture provides a mechanism for XAM Storage System vendors to create Vendor Interface Modules (VIMs) that act as bridges between the standard XAM APIs and the vendor's storage systems. How the VIMs connect to their respective devices (for example, TCP/IP, SCSI, or a file system) is transparent to the XAM API and the application. The connection is completely encapsulated by the VIM; the applications should be unaware of the VIM's existence and functionality.


When the application requests access to a specific XSystem (see Figure above), the XAM Library discovers the appropriate VIM to use to dispatch the request to the XSystem. Once a XAM session has been created to connect the application to the XSystem, the XAM Library dispatches additional application requests to the XSystem using the selected VIM. The VIM then communicates with the XAM Storage System (not shown), executes the request, and returns the response to the XAM Library, which in turn sends it to the application. The application can also use convenience interfaces in the XAM Toolkit. XAM allows the VIM to act as a cache so that it can optimize communication between the VIM and the XAM Storage System.

Note that the VIM may be operating in the same context as the application, and thus is potentially subject to malicious attacks. To ensure data security and integrity in the XAM Storage System, the XAM architecture requires all data security and integrity checks to be performed when the application's data is committed to persistent storage. XAM also strongly recommends that these checks occur at the time the application first modifies the data, so that an application can more directly correlate any issues to the specific operation that caused the issue. The XAM specification is currently under development and only a draft of the specification has been officially published[4].
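The sketch below is a purely hypothetical rendering of the access pattern just described (application, XAM Library, VIM); the type and method names are illustrative and are not taken from the draft XAM Java binding.

// Hypothetical sketch: an application storing a fixed-content object through a
// XAM-Library-style facade that hides which vendor system (and VIM) is behind it.
import java.util.Map;

interface XamLibrary {
    XSystemConnection connect(String systemName, Map<String, String> credentials); // picks the right VIM
}

interface XSystemConnection {
    String store(byte[] content, Map<String, String> metadata);  // returns a location-independent object id
    byte[] retrieve(String objectId);
    void close();
}

class ArchiveClient {
    static String archive(XamLibrary xam, byte[] document) {
        XSystemConnection xsystem = xam.connect("archive-system", Map.of());
        try {
            // retention metadata travels with the data, supporting records-retention practices
            return xsystem.store(document, Map.of("retention", "7y"));
        } finally {
            xsystem.close();
        }
    }
}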

References

1. Home, http://www.snia.org/tech_activities/standards/curr_standards/xam/
2. C Binding, http://www.snia.org/tech_activities/publicreview/XAM_C_API_v1.0.pdf
3. Java Binding, http://www.snia.org/tech_activities/publicreview/XAM_Java_API_v1.0.pdf
4. Specification (Draft), http://www.snia.org/tech_activities/publicreview/

4.2.33 FCAPS. ITU

The comprehensive management of an organization's information technology (IT) infrastructure is a fundamental requirement. Employees and customers rely on IT services whose availability and performance are mandated, and problems must be quickly identified and resolved. Mean time to repair (MTTR) must be as short as possible to avoid system downtimes where a loss of revenue or lives is possible. FCAPS[1] is a model and framework for network management defined by the International Telecommunications Union (ITU). FCAPS is an acronym for Fault, Configuration, Accounting, Performance and Security, which are the management categories into which the ISO model divides network management tasks. In non-billing organizations Accounting is sometimes replaced with Administration. The interactions among the FCAPS functions are depicted in the following Figure:

Fault management is the domain where network problems are discovered and corrected. Steps are then taken to prevent them from occurring or recurring. By doing so, the network remains operational and downtime is minimized.

Configuration management is where daily operations are monitored and controlled. All hardware and programming changes are coordinated. In addition, new programs, new equipment, modification of existing systems and the removal of obsolete systems and programs are also coordinated.

Accounting management is devoted to determining how to optimally distribute resources among enterprise subscribers. This helps to minimize the cost of operations by making the most effective use of the systems available. This level is also responsible for ensuring the appropriate billing of users.

Performance management is involved in managing the overall performance of the enterprise network. Potential problems are identified, throughput is maximized and bottlenecks are identified. Improvements that will yield the greatest enhancement to overall performance are identified.


Security management is responsible for protecting the network from unauthorized users and physical and electronic sabotage. Security management is responsible for user authentication and authorization. It also maintains the confidentiality of user information.

References

1. Specification, http://www.itu.int/rec/T-REC-M.3400/en

4.2.34 Formalism for visual security protocol modelling

The current Model Driven Architecture does not have security-protocol-specific modelling features. GSPML is a visual modelling formalism that is compositional, comprehensive, laconic, and lucid. It is well-defined through its hyper-graph grammar and SOS. There is no other visual modelling technique that satisfies all of these criteria. GSPML can model other forms of security protocols in addition to cryptographic protocols. Additional information can be found in [1].

References

1. Journal of Visual Languages and Computing 19 (2008) 153-181, "A formalism for visual security protocol modelling" (J. McDermott, G. Allwein)

4.2.35 HTML 5. W3C

XHTML2 [XHTML2] defines a new HTML vocabulary with better features for hyperlinks, multimedia content, annotating document edits, rich metadata, declarative interactive forms, and describing the semantics of human literary works such as poems and scientific papers. However, it lacks elements to express the semantics of many of the non-document types of content often seen on the Web. For instance, forum sites, auction sites, search engines, online shops, and the like, do not fit the document metaphor well, and are not covered by XHTML2. HTML 5[1] aims to extend HTML so that it is also suitable in these contexts. XHTML2 and HTML 5 use different namespaces and can therefore both be implemented in the same XML processor.

References

1. HTML 5, http://www.w3.org/TR/2008/WD-html5-20080122


4.2.36 iSCSI Management API (IMA). SNIA

The IMA[1] specification from the Storage Networking Industry Association (SNIA) defines a standard interface that applications can use to perform iSCSI management independently of the vendor of the iSCSI HBA. IMA was designed to be implemented using a combination of a library and plugins. An IMA library provides the interface that applications use to perform iSCSI management. Among other things, the library is responsible for loading plugins and dispatching requests from a management application to the appropriate plugin(s). Plugins are provided by iSCSI HBA vendors to manage their hardware. Typically, a plugin will take a request in the generic format provided by the library, translate that request into a vendor specific format and forward the request on to the vendor's device driver. In practice, a plugin may use a DLL or shared object library to communicate with the device driver, and it may communicate with multiple device drivers. Ultimately, the method a plugin uses to accomplish its work is entirely vendor specific. At the time of writing this summary, the IMA standard is still under development and no official documentation has been released. There is also an incomplete web page[2], hosted by the SourceForge community, that planned to offer an open source reference implementation.
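The library/plugin split described here follows a familiar dispatch pattern; the sketch below illustrates it with hypothetical Java types (IMA itself is a C API, so this is not the IMA interface).

// Illustrative sketch of the library/plugin split: a generic library routes management
// requests to whichever vendor plugin owns the target HBA. All names are hypothetical.
import java.util.List;

interface IscsiHbaPlugin {
    boolean owns(String hbaId);                       // does this vendor plugin manage the HBA?
    void setInitiatorName(String hbaId, String iqn);  // vendor-specific implementation
}

class IscsiManagementLibrary {
    private final List<IscsiHbaPlugin> plugins;

    IscsiManagementLibrary(List<IscsiHbaPlugin> vendorPlugins) {
        // in a real library the plugins would be discovered and loaded dynamically
        this.plugins = vendorPlugins;
    }

    // The application calls the generic API; the library dispatches to the right plugin.
    void setInitiatorName(String hbaId, String iqn) {
        for (IscsiHbaPlugin p : plugins) {
            if (p.owns(hbaId)) { p.setInitiatorName(hbaId, iqn); return; }
        }
        throw new IllegalArgumentException("no plugin manages HBA " + hbaId);
    }
}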

References

1. Home, http://www.snia.org/tech_activities/standards/curr_standards/ima
2. Reference Implementation, http://www.snia.org/tech_activities/standards/curr_standards/ima

4.2.37 Information Technology Infrastructure Library (ITIL). OGC

The IT Infrastructure Library (ITIL)[1][2] refers to a set of comprehensive, consistent and coherent codes of best practice for IT Service Management. It comprises a library developed by the Central Computer & Telecommunications Agency (CCTA) in the United Kingdom; in April 2001 the CCTA was renamed the OGC (Office of Government Commerce). The library describes a number of related processes. ITIL was developed in the late 1980s in response to the recognition that organizations were becoming increasingly dependent on Information Systems (IS). The objective of the OGC in developing ITIL is to promote business effectiveness in the use of IS, given increasing organizational demands to reduce costs while maintaining or improving IT services. The ITIL concept of best practices, developed through the involvement of leading industry experts, consultants and practitioners, remains the only holistic, non-proprietary best practice framework available. As a result, it has quickly become the global benchmark by which organizations measure the quality of IT service management.

Each described process in the Infrastructure Library covers a specific part of IT Service Management and its relationship to other processes. Each book can be read, and the process implemented, independently of the others. The overall provision of IT services, however, can be optimized by considering each process as part of the whole, such that the whole is greater than the sum of its parts. This holistic approach suggests that organizations are likely to gain the most benefit from implementing all processes rather than some processes discretely. The most popular ITIL processes are contained in the two sets representing key elements of IT Service Management: the Service Support and Service Delivery sets describe the processes that any IT service provider must address to enhance the provision of quality IT services for its customers. Many organizations have embraced the ITIL concept because it offers a systematic and professional approach to the management of IT service provision. There are many benefits to be reaped by adopting the guidance provided by ITIL. Such benefits include but are not limited to:

Improved customer satisfaction
Reduced cost in developing practices and procedures
Better communication flows between IT staff and customers
Greater productivity and use of skills and experience

ITIL provides IT professionals with the knowledge and resources they need to run and maintain an effective and efficient IT infrastructure that meets the needs of their clients while keeping costs at a minimum. The 'library' itself continues to evolve; ITIL v3 is the current release. It comprises five distinct volumes:

ITIL Service Strategy - It helps focus upon understanding, and upon translating business strategy into IT strategy, as well as selection of the best practices for the particular industry in question. The following topics are covered by this volume:
o Strategy and value planning
o Roles / responsibilities
o Planning and implementing service strategies
o Business planning and IT strategy linkage
o Challenges, risks and critical success factors

ITIL Service Design - This volume provides guidance on the creation and maintenance of IT policies and architectures for the design of IT service solutions. Included are the following topics:
o The service lifecycle
o Roles and responsibilities
o Service design objectives and elements
o Selecting the appropriate model
o Cost model
o Benefit and risk analysis
o Implementation
o Measurement / control
o CSF's and risks
o This also embraces outsourcing, in-sourcing and co-sourcing


ITIL Service Transition - Fundamentally, it covers how to create a transition strategy from service design and transfer it to the production (business) environment. It includes the following topics:
o Managing change (organizational and cultural)
o Knowledge management
o Risk analysis
o The principles of service transition
o Lifecycle stages
o Methods, practices and tools
o Measurement and control
o Other best practices

ITIL Service Operation - It embraces the familiar basics of how to manage services in the production environment, including day to day issues and fire fighting. The following topics are included:
o Principles and lifecycle stages
o Process fundamentals
o Application management
o Infrastructure management
o Operations management
o CSF's and risks
o Control processes and functions

ITIL Continual Service Improvement - It basically describes how to improve a service after it is deployed. It includes the following topics:
o The drivers for improvement
o The principles of CSI
o Roles and responsibilities
o The benefits
o Implementation
o Methods, practices and tools
o Other best practices

References

1. Home, http://www.itil-officialsite.com/home/home.asp
2. Open Guide, http://www.itlibrary.org/

4.2.38 J2EE Activity Service for Extended Transactions. JCP

An increasingly large number of distributed applications are constructed by composing existing applications. The resulting applications can be very complex in structure, with complex relationships between their constituent applications. Furthermore, the execution of such an application may take a long time to complete, and may contain long periods of inactivity, often due to the constituent applications requiring user interactions. Therefore, new functionality is required for supporting flexible ways of composing an application using transactions, with support for enabling the application to possess some or all ACID properties. Such support should include facilities for supporting business rules, programming rules, and data usage patterns. Long-running applications and activities can be structured as many independent, short-duration top-level transactions, forming a logical long-running transaction. In the event of failures, obtaining reliable execution semantics for the entire long-running transaction may require compensation transactions which can perform forward or backward recovery.

The J2EE Activity Service defines a framework on which extended models of units of work (called activities) can be constructed. An extended activity model might simply provide a means for grouping a related set of tasks that have no transactional properties, or it may provide services for a long-running business activity that consists of a number of short-duration ACID transactions. This provides powerful structuring mechanisms for workflow engines, component management middleware (EJB containers...) and other systems, allowing implementations of advanced transaction models to be created. The Activity Service provides the notion of activity: an abstract unit of work whose precise nature needs to be defined by applications or users of the service. Whatever work an activity does, the result of a completed activity is its outcome, which can be used to determine the subsequent flow of control to other activities. Activities may be nested, creating more specific scopes. Moreover, activities can run over long periods of time and can be suspended and resumed later. Finally, activities can also be transactional, using JTA transactions, though they do not have to use the native application server transactions at all. The following is an example of a composite set of activities (both transactional and non-transactional):

The dotted ellipses represent activity boundaries, whereas the solid ellipses are transaction boundaries. Activity A1 contains two nested transactions, while activities A2, A4 and A5 contain no transaction. Activity A3 is more involved; it contains a transaction, which again contains one activity (A3') that nests a transaction. Activities A1 and A2 are sequential, while A3 and A4 are executed in parallel. The J2EE Activity Service consists of two main components: the activity service itself and one or more high level services (HLSs). The architecture of the J2EE Activity Service is illustrated in the following figure:


HLSs are defined on top of the activity service and represent the concrete advanced transaction models. Applications use an HLS to demarcate activities, which produce an outcome. In order to implement a transaction model using the J2EE Activity Service, developers must provide the demarcation points (called signals), the actions that respond to the signals, the outcomes, and the state transitions (signalsets).

References

1. JCP J2EE Activity Service for Extended Transactions, http://jcp.org/en/jsr/detail?id=95

Other Links

JASS: Open-source Activity Service Implementation, http://jass.ow2.org/

Notes

The J2EE Activity Service was ultimately not included as a standard part of the J(2)EE specification, and thus most J(2)EE-certified application servers do not include an implementation of it.

4.2.39 J2EE APIs for Continuous Availability. JCP
This specification was included in the Java Community Process (JCP)[1]. While the J2EE platform and its programming model were designed to support the development and deployment of continuous-availability applications, the J2EE platform does not currently provide API support for certain functions required by these applications. Therefore, the vendors of J2EE platforms either do not support these functions in their products, or support them with vendor-proprietary APIs. The goal of this JSR was to try to standardize the APIs for some of the functions that are essential to continuous-availability applications.
Notes: No results have been published since the creation of the JCP committee in 2001.

References

1. JCP Home, http://jcp.org/en/jsr/detail?id=117

4.2.40 Java API for XML Transactions (JAXTX). JCP
It was previously known as the XML Transactioning API for Java (JAXTX). It was an attempt to define a set of APIs that allow the management (creation and lifetime) and exchange of transaction information between participating parties in a loosely coupled environment. The parties would use SOAP and XML document exchange to conduct business transactions. If these transactions are to be conducted in an ACID transaction manner, then information (e.g., the transaction context) would need to accompany these XML documents and be managed appropriately. The objective was not to define either XML messaging standards or XML schemas for particular tasks. These networking and formatting standards belong in networking standards bodies such as OASIS or IETF. Instead, the specification aimed to define standard Java APIs to allow convenient access from Java to emerging XML messaging standards, making it easier to isolate application programmers and application servers from the underlying transaction manager implementations (BTP, WS-TX, and Activity Service). JAXTX also proposed a closer relationship to the BTP and Activity Service specifications. The JAXTX JCP committee worked jointly with the Business Transactions Protocol committee in OASIS and the JCP community (JSR 95) to provide extended non-ACID transactions that would enable applications to run business transactions that span many organisations, last for hours, days, and weeks, and yet still retain some of the fault-tolerant and consistency aspects of traditional ACID transactions.

References

1. JCP Java API for XML Transactions, http://jcp.org/en/jsr/detail?id=156

Other links

JBoss Transactions, http://www.jboss.org/jbosstm/resources/product_overview/wst.html: this product seems to be based on the JAXTX API.

Notes


Since BTP and the Activity Service are hardly used, interest in JAXTX has gradually faded within the community. It seems that no official public documentation was released for this JCP specification.

4.2.41 Java Authentication and Authorization Service (JAAS). SUN
JAAS[1] is a SUN specification that can be used to provide user management and permissioning. The authentication service allows an application to "log in" a user and establish which "identities" (groups or multiple users pulled in from different systems) the logged-in user has. The authorization service allows an application to specify which permissions a user's identity has and then check for those permissions before executing any Java code. Additional information can be found in [2]. A brief usage sketch is given after the references below.

References

1. Home, http://java.sun.com/products/archive/jaas/
2. "Applying security policies through agent roles: A JAAS based approach", Science of Computer Programming 59 (2006) 127–146 (Giacomo Cabri, Luca Ferrari, Letizia Leonardi)
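The following is a minimal sketch of the JAAS programming model described above; the login configuration entry name ("Sample") and the printed output are illustrative assumptions, not part of the specification text.

import java.security.PrivilegedAction;
import javax.security.auth.Subject;
import javax.security.auth.login.LoginContext;
import javax.security.auth.login.LoginException;

public class JaasSketch {
    public static void main(String[] args) throws LoginException {
        // "Sample" must match an entry in the JAAS login configuration
        // (passed e.g. via -Djava.security.auth.login.config); hypothetical name.
        LoginContext lc = new LoginContext("Sample");
        lc.login();                               // authentication: runs the configured LoginModules
        final Subject subject = lc.getSubject();  // carries the authenticated Principals

        // Authorization: run code as the authenticated Subject; the permissions
        // granted to its Principals in the security policy apply during execution.
        Subject.doAsPrivileged(subject, new PrivilegedAction<Void>() {
            public Void run() {
                System.out.println("Running as: " + subject.getPrincipals());
                return null;
            }
        }, null);

        lc.logout();
    }
}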

4.2.42 Java Data Object Secure Specification
The Java Data Objects (JDO) specification[1], designed as a lightweight persistence approach, doesn't provide any declarative security capabilities. JDOSecure introduces a role-based permission system to the JDO persistence layer, which is based on the Java Authentication and Authorization Service (JAAS). It comprises a management solution for users, roles, and permissions and allows storing the authentication and authorization information in any arbitrary JDO resource. Furthermore, a Java-based administration utility with a graphical user interface simplifies the maintenance of security privileges and permissions. Additional information can be found in [2].
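To give a feel for the JDO persistence layer that JDOSecure protects, here is a minimal sketch using the standard javax.jdo API; the factory class name, the omitted connection properties and the Account class are hypothetical, and JDOSecure would be configured as a drop-in replacement for the vendor factory so that JAAS-based permission checks wrap each call.

import java.util.Properties;
import javax.jdo.JDOHelper;
import javax.jdo.PersistenceManager;
import javax.jdo.PersistenceManagerFactory;
import javax.jdo.Transaction;

// Hypothetical persistence-capable class (would be enhanced by the JDO enhancer).
class Account {
    private String owner;
    private int balance;
    Account(String owner, int balance) { this.owner = owner; this.balance = balance; }
}

public class JdoSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Hypothetical vendor factory; JDOSecure is plugged in by naming its own
        // factory class here, so that JAAS permission checks guard every operation.
        props.setProperty("javax.jdo.PersistenceManagerFactoryClass",
                          "com.example.jdo.VendorPersistenceManagerFactory");

        PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory(props);
        PersistenceManager pm = pmf.getPersistenceManager();
        Transaction tx = pm.currentTransaction();
        tx.begin();
        pm.makePersistent(new Account("alice", 100));  // persisted only if the caller is authorized
        tx.commit();
        pm.close();
    }
}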

References

1. Home, http://projekt-jdo.uni-mannheim.de/jdosecure/
2. "Enabling declarative security through the use of Java Data Objects", Science of Computer Programming 70 (2008) 208–220 (Matthias Merz)

4.2.43 Java Management Extensions (JMX). JCP
JMX [1][2][3] is a Java specification included in the Java Community Process (JCP). It is a specification that provides a management architecture, APIs and services for building Web-based, distributed, dynamic and modular solutions to manage Java-enabled resources. Those resources are represented by objects called MBeans (for Managed Bean or component).


Some benefits of JMX Technology are:

The JMX technology enables Java developers to encapsulate resources as Java objects and expose them as management resources in a distributed environment. The JMX specification lists the following benefits to using it to build a management infrastructure:
o Manages Java applications and services without heavy investment: JMX architecture relies on a core managed object server that acts as a management agent and can run on most Java-enabled devices. Java applications can be managed with little impact on their design.
o Provides a scalable management architecture: A JMX agent service is independent and can be plugged into the management agent. The component-based approach enables JMX solutions to scale from small footprint devices to large telecommunications switches.
o Can leverage future management concepts: It can implement flexible and dynamic management solutions. It can leverage emerging technologies; for example, JMX solutions can use lookup and discovery services such as Jini network technology, UPnP, and Service Location Protocol (SLP).
o Focuses on management: While JMX technology provides a number of services designed to fit into a distributed environment, its APIs are focused on providing functionality for managing networks, systems, applications, and services.

The JMX architecture is depicted in the following figure:

o The Probe level: Also called the Instrumentation level. Contains the probes (MBeans) instrumenting the resources. It also defines a generic notification model based on the Java event model. It lets developers build proactive management solutions.
o The Agent level: It acts as an intermediary between the MBeans and the applications. It provides a specification for implementing agents, which control the resources and make them available to remote management applications. Agents are usually located on the same machine as the resources they manage, but this is not a requirement. The JMX agent consists of an MBean server and a set of services for handling MBeans. Managers access an agent's MBeans and use the provided services through a protocol adaptor or connector. The MBean server is the core of JMX.
o The Remote Management level: Enables remote applications to access the MBeanServer through connectors and adaptors. A connector provides full remote access to the MBeanServer API using various communication frameworks (RMI, IIOP, JMS, WS-* ...), while an adaptor adapts the API to another protocol (SNMP, ...) or to a Web-based GUI (HTML/HTTP, WML/HTTP, ...).

Typical uses of the JMX technology include:

o Consulting and changing application configuration
o Collecting statistics about application behavior and making the statistics available
o Notification of state changes and erroneous conditions

JMX is supported by most J(2)EE application servers such as JBoss, JOnAS, WebSphere Application Server, WebLogic, Oracle Application Server 10g and Sun Java System Application Server.
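As a minimal illustration of the instrumentation and agent levels described above, the sketch below registers a standard MBean with the platform MBean server; the Counter resource and the object name domain are hypothetical.

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Management interface: by the JMX standard MBean convention its name ends in "MBean".
interface CounterMBean {
    int getCount();
    void reset();
}

// The managed resource (the probe at the instrumentation level).
class Counter implements CounterMBean {
    private int count;
    public int getCount() { return count; }
    public void reset() { count = 0; }
    public void increment() { count++; }
}

public class JmxSketch {
    public static void main(String[] args) throws Exception {
        // Agent level: the MBean server hosts the MBeans and exposes them to managers.
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("com.example:type=Counter"); // hypothetical domain
        server.registerMBean(new Counter(), name);
        // A management console (e.g. jconsole) can now read the Count attribute and
        // invoke the reset operation through a connector at the remote management level.
        Thread.sleep(60000);
    }
}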

References

1. JMX JCP Home, http://jcp.org/en/jsr/detail?id=3
2. JCP Home Version 2.0 (Current Version), http://jcp.org/en/jsr/detail?id=255
3. Web Services Connector for Java Management Extensions (JMX) Agents, http://jcp.org/en/jsr/detail?id=262

4.2.44 Java Secure Socket Extension (JSSE). SUN
SUN's JSSE[1] enables secure Internet communications. It provides a framework and an implementation for a Java version of the SSL and TLS protocols and includes functionality for data encryption, server authentication, message integrity, and optional client authentication. Using JSSE, developers can provide for the secure passage of data between a client and a server running any application protocol, such as Hypertext Transfer Protocol (HTTP), Telnet, or FTP, over TCP/IP.
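A minimal client-side sketch of JSSE follows; the host name is an illustrative assumption, and the default SSLSocketFactory is configured through the standard JSSE trust store properties.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class JsseClientSketch {
    public static void main(String[] args) throws Exception {
        // Obtain the default factory (uses the JVM-wide trust store configuration).
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        SSLSocket socket = (SSLSocket) factory.createSocket("www.example.org", 443);
        socket.startHandshake();  // TLS handshake: server authentication and key agreement

        OutputStream out = socket.getOutputStream();
        out.write("GET / HTTP/1.0\r\nHost: www.example.org\r\n\r\n".getBytes("US-ASCII"));
        out.flush();

        // All application data now travels encrypted over the socket.
        BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
        System.out.println(in.readLine());  // e.g. the HTTP status line
        socket.close();
    }
}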

References

1. Reference Guide, http://java.sun.com/j2se/1.5.0/docs/guide/security/jsse/JSSERefGuide.html


4.2.45 Java Security Manager. SUN
SUN's security manager[1] is a Java class that allows applications to implement a security policy. The job of the Security Manager is to keep track of who is allowed to do which dangerous operations. It allows an application to determine, before performing a possibly unsafe or sensitive operation, what the operation is and whether it is being attempted in a security context that allows the operation to be performed. The application can then allow or disallow the operation. Additional information can be found in [2].
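The sketch below shows the typical pattern: install a security manager and perform an explicit permission check before a sensitive operation; the file path is a hypothetical example and the outcome depends on the policy file in effect.

import java.io.FilePermission;

public class SecurityManagerSketch {
    public static void main(String[] args) {
        // Install the default security manager; from now on the policy in effect
        // (java.policy or -Djava.security.policy=...) governs sensitive operations.
        System.setSecurityManager(new SecurityManager());

        SecurityManager sm = System.getSecurityManager();
        if (sm != null) {
            // Throws SecurityException if the current security context has not been
            // granted permission to read the (hypothetical) file.
            sm.checkPermission(new FilePermission("/tmp/data.txt", "read"));
        }
        System.out.println("Read permission granted by the active policy.");
    }
}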

References

1. Home, http://java.sun.com/j2se/1.4.2/docs/api/java/lang/SecurityManager.html
2. "Performance of the Java security manager", Computers & Security (2005) 24, 192-207 (Almut Herzog, Nahid Shahmehri)

4.2.46 Java Transaction API (JTA). JCP
The Java Transaction API (JTA) specification was developed by Sun Microsystems in cooperation with leading industry partners in the transaction processing and database system arena. JTA specifies standard Java interfaces between a transaction manager and the parties involved in a distributed transaction system: the resource manager, the application server, and the transactional applications. The main purpose of JTA is to allow applications to demarcate transactions. In principle, it is the responsibility of the application developer to demarcate transaction boundaries using, among others, three basic methods defined in the JTA API: begin(), commit() and rollback(). These are called programmatic transactions. JTA is a standard part of the J(2)EE platform, and every Enterprise JavaBeans (EJB) application server should also include a JTA implementation. Apart from using programmatic transactions, application servers allow developers to demarcate transactions without accessing the JTA API directly. This prevents non-functional code related to transaction management from being included in the business components; rather, the application server makes the appropriate calls behind the scenes. These are called container-managed transactions. Several resources, such as databases, can participate in a transaction. JTA also provides the required methods to allow resource registration and notification. The following figure shows the environment of JTA:

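As a brief illustration of programmatic demarcation with JTA, the following sketch obtains the UserTransaction from the standard JNDI name and brackets two hypothetical business operations with begin/commit/rollback.

import javax.naming.InitialContext;
import javax.transaction.UserTransaction;

public class JtaSketch {
    public void transferFunds() throws Exception {
        // Standard JNDI name under which J(2)EE containers expose UserTransaction.
        UserTransaction utx = (UserTransaction)
                new InitialContext().lookup("java:comp/UserTransaction");
        utx.begin();            // programmatic demarcation: the transaction starts here
        try {
            debitAccount();     // hypothetical business operations touching
            creditAccount();    // JTA-aware resources (e.g. two XA data sources)
            utx.commit();       // the transaction manager coordinates the enlisted resources
        } catch (Exception e) {
            utx.rollback();     // undo all work performed within the transaction
            throw e;
        }
    }

    private void debitAccount()  { /* update the first resource */ }
    private void creditAccount() { /* update the second resource */ }
}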

References

1. Java Transaction API (JTA), http://jcp.org/en/jsr/detail?id=907

Other Links

Java Open Transaction Manager (JOTM), the transaction manager of the JOnAS application server, http://jotm.ow2.org/
JBossTransactions, the transaction manager of the JBoss application server, http://wiki.jboss.org/wiki/JBossTransactions

4.2.47 MSR Specification Language
Typed MSR is a strongly typed specification language for security protocols, aiming to discover errors in their design. It is particularly suitable for privacy-preserving protocols because it features memory predicates, which enable it to faithfully encode systems consisting of a collection of coordinated subprotocols, a common characteristic of privacy-preserving protocols. Additional information can be found in [1].

References


1. "Specifying Privacy Preserving Protocols in Typed MSR", Computer Standards & Interfaces 27 (2005) 501–512 (Theodoros Balopoulos, Stefanos Gritzalis, Sokratis K. Katsikas)

4.2.48 Message Transmission Optimization Mechanism (MTOM). W3C
Message Transmission Optimization Mechanism (MTOM) describes a mechanism for optimizing the transmission and/or wire format of a SOAP message by selectively re-encoding sections of the message while exposing an XML Infoset (Information Set) to the SOAP application. MTOM also describes an Inclusion Mechanism that works in a binding-independent way, plus a specific binding for HTTP. MTOM is specialized in solving the "attachments" problem in SOAP. The approach is to make the attached binary content logically inline with the SOAP document even if it is included separately. Non-XML data is processed in a way similar to SOAP with Attachments (SwA): the data is simply streamed as binary data in one of the MIME message parts. This preserves backward compatibility with SwA endpoints. The most remarkable feature of MTOM is the use of the xop:Include element: it allows applications to process and describe the content simply by looking at the XML part. MTOM attachments can also be secured by using WS-Security. MTOM is focused on:

o interoperability: it has been a W3C recommendation since January 2005. All recent versions of WS-tooling provide support for MTOM. Further, MTOM messages are valid SwA messages, lowering the cost of supporting MTOM for existing SwA implementations. MTOM attachments are managed as described above;
o composability: the final result of an MTOM transfer is a SOAP envelope. This aspect assures that all higher-level Web service protocols operate as designed;
o efficiency: it serializes the data for efficient transport over MIME Multipart messages.
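To illustrate how MTOM is typically switched on in practice, here is a minimal JAX-WS sketch; the service class, operation and image-loading helper are hypothetical, while the @MTOM annotation and Endpoint.publish call are standard JAX-WS.

import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.ws.Endpoint;
import javax.xml.ws.soap.MTOM;

// Hypothetical service endpoint; @MTOM asks the JAX-WS runtime to send byte[]
// (base64Binary) message parts as separate MIME parts referenced via xop:Include
// instead of inlining them as base64 text in the SOAP envelope.
@MTOM
@WebService
public class ImageService {

    @WebMethod
    public byte[] downloadImage(String imageId) {
        return loadFromDisk(imageId);  // hypothetical helper returning the raw bytes
    }

    private byte[] loadFromDisk(String imageId) {
        return new byte[1024];  // placeholder content for the sketch
    }

    public static void main(String[] args) {
        // Publish the endpoint; the binary payload now travels as an MTOM attachment.
        Endpoint.publish("http://localhost:8080/images", new ImageService());
    }
}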

References

1. W3C SOAP Message Transmission Optimization Mechanism (MTOM), http://www.w3.org/TR/soap12-mtom/

4.2.49 Multipath Management API (MMA). SNIA
Open system platforms give applications access to physical devices by presenting a special set of file names that represent the devices. Although end users typically don't use these special device files, knowledgeable applications (file systems, databases, backup software) operate on these device files and provide familiar user interfaces to storage. The device files have a hierarchical organization, either by using files and directories or by naming conventions.


This hierarchy of device files (sometimes called a device tree) provides an effective interface for simpler, desktop device configurations. Inside open systems kernels, the hierarchy is exploited to allow different drivers to operate on different parts of the device tree. When the OS discovers connected devices and builds the device tree, multiple paths to the same device may show up as separate device files in the device tree. Separate storage applications using device files that represent paths to the same device will overwrite each other's data. As storage products (typically disk arrays) strove for better reliability and performance, they added multipath support. A target device supporting multiple paths and attached hosts will nearly always have multiple ports. Each permutation of initiator port, target port, and logical unit is commonly referred to as a path. With no multipath support in place, the OS would see each path as a separate logical unit. The function of multipath drivers is then to create a virtual multipath device that aggregates all these path logical units. The Multipath Management API[1][2][3] from the Storage Networking Industry Association (SNIA) allows a management application to discover the multipath devices on the current system and to discover the associated local and device ports. An implementation of the API may optionally include active management (failover, load balancing, manual path overrides). The API uses an architecture that allows multiple MP drivers installed on a system to each provide plugins to a common library. The plugins can support multipath drivers bundled with an OS, or drivers associated with an HBA, target device, or volume manager. This API can be used by host-based management applications and will also be included in the SMI-S Host Discovered Resources Profile for enterprise-wide multipath discovery and management.

References

1. Home,
2. ANSI Specification, http://webstore.ansi.org/RecordDetail.aspx?sku=ANSI+INCITS+412-2006
3. Technical Position, http://www.snia.org/tech_activities/standards/curr_standards/mma/MMA_Technical_Position_v1.0.pdf

4.2.50 NETCONF. IETF
NETCONF[1][2][3] is a network management protocol developed by the NETCONF working group in the Internet Engineering Task Force (IETF). The NETCONF protocol defines a simple mechanism through which a network device can be managed, configuration data information can be retrieved, and new configuration data can be uploaded and manipulated. The protocol allows the device to expose a full, formal application programming interface (API). Applications can use this straightforward API to send and receive full and partial configuration data sets. The NETCONF protocol uses a remote procedure call (RPC) paradigm (see figure below). A client encodes an RPC in XML and sends it to a server using a secure, connection-oriented session. The server responds with a reply encoded in XML.

The contents of both the request and the response are fully described in XML DTDs or XML schemas, or both, allowing both parties to recognize the syntax constraints imposed on the exchange. A key aspect of NETCONF is that it allows the functionality of the management protocol to closely mirror the native functionality of the device. This reduces implementation costs and allows timely access to new features. In addition, applications can access both the syntactic and semantic content of the device's native user interface. NETCONF allows a client to discover the set of protocol extensions supported by a server. These "capabilities" permit the client to adjust its behaviour to take advantage of the features exposed by the device. The capability definitions can be easily extended in a non-centralized manner. Standard and non-standard capabilities can be defined with semantic and syntactic rigor. The NETCONF protocol is a building block in a system of automated configuration. XML is the lingua franca of interchange, providing a flexible but fully specified encoding mechanism for hierarchical content. NETCONF can be used in concert with XML-based transformation technologies, such as XSLT, to provide a system for automated generation of full and partial configurations. The system can query one or more databases for data about networking topologies, links, policies, customers, and services. This data can be transformed using one or more XSLT scripts from a task-oriented, vendor-independent data schema into a form that is specific to the vendor, product, operating system, and software release. The resulting data can be passed to the device using the NETCONF protocol.

References

1. Specification, http://tools.ietf.org/html/rfc4741
2. Netconf Charter, http://www.ietf.org/html.charters/netconf-charter.html
3. Netconf Wiki, http://www3.tools.ietf.org/wg/netconf/trac/wiki

4.2.51 Network Management Architectural Model
A Network Management System (NMS) is a combination of hardware and software used to monitor and administer a network. This wiki[1] shows the major components that make up a comprehensive Network Management System and provides a high-level integration scenario.

References

1. Model, http://nms.gdd.net/index.php/Enterprise_NMS_Architectures


4.2.52 OMA-DPE. OMA
Mobile applications and services are required to function in varying network environments with different users having devices with a wide range of capabilities. The device capabilities and network conditions can vary dynamically, and applications need to be able to respond to these changes accordingly. The capabilities of a device are determined by its hardware characteristics, user settings, and installed software components. These capabilities are dynamic in nature, meaning that they can change instantaneously or even during the course of a single data session. OMA's [UAProf] enabler allows the communication of static device properties to an Application Service Provider at the beginning of a data session. The goal of the Device Profiles Evolution (DPE)[1] enabler is to define an enhanced device profiles mechanism which allows a device to convey Dynamic Device Properties to an Application Service Provider in real time, thereby ensuring that the Application Service Provider can provide content best suited to the capabilities of the device at that time.

References

1. OMA Device Profiles Evolution 1.0, http://www.openmobilealliance.org/technical/release_program/docs/rd/oma-rd-dpe-v1_0-20070209-c.pdf

4.2.53 OWL-based Web Service Ontology (OWL-S). W3C
OWL-S [1] (formerly DAML-S) is an OWL-based ontology, within the OWL-based framework of the Semantic Web, for describing Web services. The OWL-S ontology is also sometimes considered a language for describing services, reflecting the fact that it provides a standard vocabulary that can be used together with the other aspects of the OWL description language to create service descriptions. It will enable users and software agents to automatically discover, invoke, compose, and monitor Web resources offering services, under specified constraints. OWL-S supplies Web service providers with a core set of mark-up language constructs for describing the properties and capabilities of their web services in unambiguous, computer-interpretable form. It enables the following types of task:

o Automatic web service discovery: it involves the automatic location of Web services that provide a particular service and that adhere to requested constraints.
o Automatic web service invocation: it involves the automatic execution of an identified Web service by a computer program or agent.
o Automatic web service composition and interoperation: it involves the automatic selection, composition, and interoperation of Web services to perform some task, given a high-level description of an objective.


The ontology consists of three sub-ontologies (the service profile, the service grounding and the process model), which are tied together using a service ontology. The structure of the ontology of services is motivated by the need to provide three essential types of knowledge about a service.

Top level of the service ontology

Each type of knowledge is characterized by the question it answers:

o What does the service provide for prospective clients? The answer to this question is given in the "profile", which is used to advertise the service. To capture this perspective, each instance of the class Service presents a ServiceProfile.
o How is it used? The answer to this question is given in the "process model". This perspective is captured by the ServiceModel class. Instances of the class Service use the property describedBy to refer to the service's ServiceModel.
o How does one interact with it? The answer to this question is given in the "grounding". A grounding provides the needed details about transport protocols. Instances of the class Service have a supports property referring to a ServiceGrounding.

The service profile tells "what the service does", in a way that is suitable for a service-seeking agent (or matchmaking agent acting on behalf of a service-seeking agent) to determine whether the service meets its needs. This form of representation includes a description of what is accomplished by the service, limitations on service applicability and quality of service, and requirements that the service requester must satisfy to use the service successfully.

The service model tells a client how to use the service, by detailing the semantic content of requests, the conditions under which particular outcomes will occur, and, where necessary, the step-by-step processes leading to those outcomes. That is, it describes how to ask for the service and what happens when the service is carried out.


For nontrivial services (those composed of several steps over time), this description may be used by a service-seeking agent in at least four different ways:

1. to perform a more in-depth analysis of whether the service meets its needs;
2. to compose service descriptions from multiple services to perform a specific task;
3. during the course of the service enactment, to coordinate the activities of the different participants; and
4. to monitor the execution of the service.

A service grounding ("grounding" for short) specifies the details of how an agent can access a service. Typically a grounding will specify a communication protocol, message formats, and other service-specific details such as port numbers used in contacting the service. In addition, the grounding must specify, for each semantic type of input or output specified in the ServiceModel, an unambiguous way of exchanging data elements of that type with the service (that is, the serialization techniques employed). OWL-S provides one important foundation for the efforts of the Semantic Web Services Language (SWSL) committee[2] of the Semantic Web Services Initiative (SWSI[3]). SWSI is a collaborative international effort towards the development of Semantic Web Services technology. OWL-S certainly represents to date the most mature and most widely accepted initiative in the field of service ontologies.

References

1. OWL-S: Semantic Markup for Web Services, http://www.w3.org/Submission/OWL-S/
2. Semantic Web Services Language committee, http://www.daml.org/services/swsl
3. Semantic Web Services Initiative, http://www.swsi.org/

Other Links

http://www.ai.sri.com/daml/services/owl-s/1.2/

4.2.54 Platform for Privacy Preferences Project. W3C
P3P[1] allows Web sites to declare their privacy practices in a standard, machine-readable XML format known as a P3P policy. It has been developed by the World Wide Web Consortium (W3C); it supports data handling policies in Web-based transactions and allows users to automatically understand and match server practices against their privacy preferences.

References

1. Home, http://www.w3.org/TR/P3P


4.2.55 Process Definition for Java. JCP
The Process Definition for Java was an initiative of the JCP that aimed to standardize the automation of business processes on a J2EE server. Specifically, the expert group was in charge of defining metadata, interfaces, and a runtime model to allow business processes to be easily and rapidly implemented using Java and deployed in J2EE containers. The specification was announced in 2003, but since then there has been no further development and no public documentation. It seems that some of the experts in the JCP group have been involved in the definition of more successful de jure and de facto XML-based languages for process definition, such as BPEL, XPDL, and jPDL.

References

1. JCP Process Definition for Java, http://jcp.org/en/jsr/detail?id=207

Other Links

BPEL, http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=wsbpela
XPDL, http://www.wfmc.org/standards/xpdl.htm
jPDL, http://docs.jboss.org/jbpm/v3/userguide/jpdl.html

4.2.56 Programming language with privacy-preserving features
Sython[1] provides an approach to use sensitive data during the development and testing cycle. Sython is an extension to the Python programming language that adds private data-types. Private data-types keep their contents secret not from the user, but from the developer. This allows the outsourcing of software development that deals with sensitive data to potentially untrusted parties without compromising the sensitive information on which Sython operates.

References

1. Home, http://www.seas.gwu.edu/~simhaweb/software/sython/index.php

4.2.57 RDFa. W3C
The web is a rich, distributed repository of interconnected information organized primarily for human consumption. On a typical web page, an HTML author might specify a headline, then a smaller sub-headline, a block of italicized text, a few paragraphs of average-size text, and, finally, a few single-word links. Web browsers will follow these presentation instructions faithfully. However, only the human mind understands that the headline is, in fact, the blog post title, the sub-headline indicates the author, the italicized text is the article's publication date, and the single-word links are categorization labels. The gap between what programs and humans understand is large.

On the left, what browsers see. On the right, what humans see. Can we bridge the gap so browsers see more of what we see? What if the browser received information on the meaning of a web page's visual elements? A dinner party announced on a blog could be easily copied to the user's calendar, an author's email address to the user's address book. Users could automatically recall previously browsed articles according to categorization labels (often called tags). A photo copied and pasted from a web site to a school report would carry with it a link back to the photographer, giving her proper credit. When web data meant for humans is augmented with hints meant for computer programs, computers become significantly more helpful, because they begin to understand more of the data's meaning. RDFa[1] allows HTML authors to do just that. Using a few simple HTML attributes, authors can mark up human-readable data with machine-readable indications for browsers and other programs to interpret. A web page can include markup for items as simple as the title of an article, or as complex as a user's complete social network. RDFa benefits from the extensive power of RDF, the W3C's standard for interoperable machine-readable data.

References

1. RDFa, http://www.w3.org/2006/07/SWD/RDFa/primer


4.2.58 RMI-SSL (Remote Method Invocation). SUN
RMI[1] is a distributed object model and enables the development of distributed Java applications in which the methods of remote objects can be invoked from other Java virtual machines, possibly on remote hosts. The communication in RMI is based on the notion of distributed objects and follows the object paradigm. Additional information can be found in [2].
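A minimal server-side sketch of RMI over SSL follows; the Greeter interface, the registry name and the chosen ports are illustrative assumptions, while SslRMIClientSocketFactory and SslRMIServerSocketFactory are the standard JSSE-based RMI socket factories.

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;
import javax.rmi.ssl.SslRMIClientSocketFactory;
import javax.rmi.ssl.SslRMIServerSocketFactory;

// Hypothetical remote interface; every remote method must declare RemoteException.
interface Greeter extends Remote {
    String greet(String name) throws RemoteException;
}

public class RmiSslServerSketch implements Greeter {
    public String greet(String name) {
        return "Hello, " + name;
    }

    public static void main(String[] args) throws Exception {
        RmiSslServerSketch impl = new RmiSslServerSketch();
        // Export the object over SSL socket factories so that remote method calls
        // (and the serialized arguments and results) travel over an encrypted channel.
        Greeter stub = (Greeter) UnicastRemoteObject.exportObject(
                impl, 0, new SslRMIClientSocketFactory(), new SslRMIServerSocketFactory());
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("Greeter", stub);  // clients look the stub up under this name
        System.out.println("Greeter bound; waiting for SSL-protected RMI calls.");
    }
}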

References

1. Home, http://java.sun.com/javase/technologies/core/basic/rmi/index.jsp
2. "An agent-based inter-application information flow control model", Journal of Systems and Software 75 (2005) 179–187 (Shih-Chien Chou)

4.2.59 RSS
RSS [1][2] stands for RDF Site Summary or Really Simple Syndication (depending on the version of RSS). RSS is a lightweight, multipurpose, extensible metadata description and syndication format based on XML. RSS allows website owners to describe their content in chunks and to publish it in a so-called RSS feed. Those RSS feeds may be used to integrate information into web portals or to add information to third-party websites. They may also be used by software programs (RSS readers) to update users about changes to websites. This allows users to subscribe to different RSS feeds and to be informed whenever a website is changed. Websites supporting this concept are often marked with a specific RSS icon. The following screenshot shows an example of the RSS feed of the NEXOF Wiki being viewed within an RSS reader:


RSS has been defined in several versions, some of them based on the RDF framework of the W3C and others published with their own XML syntax. In addition to this, the ATOM format[3] is often used for similar purposes.
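Since an RSS 2.0 feed is plain XML, a minimal consumer can be written with the standard JDK DOM parser, as in the sketch below; the feed URL is a hypothetical example.

import java.net.URL;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class RssReaderSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical feed URL; each <item> in the <channel> describes one chunk of content.
        URL feed = new URL("http://example.org/news/rss.xml");
        DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.parse(feed.openStream());

        NodeList items = doc.getElementsByTagName("item");
        for (int i = 0; i < items.getLength(); i++) {
            Element item = (Element) items.item(i);
            String title = item.getElementsByTagName("title").item(0).getTextContent();
            String link = item.getElementsByTagName("link").item(0).getTextContent();
            System.out.println(title + " -> " + link);
        }
    }
}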

References

1. RSS 2.0 Specification, http://www.rssboard.org/rss-specification
2. RSS 1.0 Specification, http://web.resource.org/rss/1.0/spec
3. ATOM Specification, http://tools.ietf.org/html/rfc4287

4.2.60 SCXML. W3C
SCXML [1] is a general-purpose event-based state machine language that can be used in many ways, including:

o As a high-level dialog language controlling VoiceXML 3.0's encapsulated speech modules (voice form, voice picklist, etc.).
o As a voice application metalanguage, where in addition to VoiceXML 3.0 functionality, it may also control database access and business logic modules.
o As a multimodal control language in the MultiModal Interaction framework [W3C MMI], combining VoiceXML 3.0 dialogs with dialogs in other modalities including keyboard and mouse, ink, vision, haptics, etc. It may also control combined modalities such as lip-reading (combined speech recognition and vision), speech input with keyboard as fallback, and multiple keyboards for multi-user editing.
o As the state machine framework for a future version of CCXML.
o As an extended call centre management language, combining CCXML call control functionality with computer-telephony integration for call centres that integrate telephone calls with computer screen pops, as well as other types of message exchange such as chats, instant messaging, etc.
o As a general process control language in other contexts not involving speech processing.

SCXML combines concepts from CCXML and Harel State Tables. CCXML [W3C CCXML 1.0] is an event-based state machine language designed to support call control features in Voice Applications (specifically including VoiceXML but not limited to it). The CCXML 1.0 specification defines both a state machine and event handling syntax and a standardized set of call control elements. Harel State Tables are a state machine notation that was developed by the mathematician David Harel [Harel and Politi] and is included in UML [UML 2.0]. They offer clean and well-thought-out semantics for sophisticated constructs such as parallel states. They have been defined as a graphical specification language, however, and hence do not have an XML representation. The goal of SCXML is to combine Harel semantics with an XML syntax that is a logical extension of CCXML's state and event notation.


References

1. State Chart extensible Markup Language (SCXML), http://www.w3.org/TR/scxml/

4.2.61 Security Assertion Mark-up Language (SAML). OASIS
SAML[1] is an OASIS specification that allows business entities to make assertions regarding the identity, attributes, and entitlements of a subject (an entity that is often a human user) to other entities, such as a partner company or another enterprise application. SAML is a flexible and extensible protocol designed to be used and customized if necessary by other standards. The Liberty Alliance, the Internet2 Shibboleth project, and the OASIS Web Services Security (WS-Security) committee have all adopted SAML as a technological underpinning for various purposes.

References

1. Home, http://www.oasis-open.org/committees/security

4.2.62 Semantic Annotation for WSDL (SAWSDL). W3C
The Web Services Description Language (WSDL) specifies a way to describe the abstract functionalities of a service and, concretely, how and where to invoke it. The WSDL 2.0 [1] W3C Recommendation does not include semantics in the description of Web services. Therefore, two services can have similar descriptions while meaning totally different things, or they can have very different descriptions yet similar meaning. Resolving such ambiguities in Web services descriptions is an important step toward automating the discovery and composition of Web services — a key productivity enabler in many domains including business application integration. In 2006, the W3C created a charter for the Semantic Annotation of Web Services (SAWSDL [2]), which used WSDL-S [3] as its primary input. SAWSDL became a W3C candidate recommendation in January 2007. SAWSDL defines mechanisms by which semantic annotations can be added to WSDL components. SAWSDL defines how to add semantic annotations to various parts of a WSDL document such as input and output message structures, interfaces and operations. The extension attributes defined in this specification fit within the WSDL 2.0 extensibility framework. It provides mechanisms by which concepts from the semantic models that are defined either within or outside the WSDL document can be referenced from within WSDL components as annotations. These semantics, when expressed in formal languages, can help disambiguate the description of Web services during automatic discovery and composition of the Web services. For example, the specification defines a way to annotate WSDL interfaces and operations with categorization information that can be used to publish a Web service in a registry. The annotations on schema types can be used during Web service discovery and composition. In addition, SAWSDL defines an annotation mechanism for specifying the data mapping of XML Schema types to and from an ontology; such mappings could be used during invocation, particularly when mediation is required.


To accomplish semantic annotation, SAWSDL defines extension attributes that can be applied both to WSDL elements and to XML Schema elements. Semantic annotations are references from an element within a WSDL or XML Schema document to a concept in an ontology or to a mapping. This specification defines annotation mechanisms for relating the constituent structures of WSDL input and output messages to concepts defined in an outside ontology. Similarly, it defines how to annotate WSDL operations and interfaces. Further, it defines an annotation mechanism for specifying the structural mapping of XML Schema types to and from an ontology by means of a reference to a mapping definition. The annotation mechanism is independent of the ontology expression language, and this specification requires no particular ontology language. It is also independent of mapping languages and does not restrict the possible choices of such languages. The key design principles for SAWSDL are:

o The specification enables semantic annotations for Web services using and building on the existing extensibility framework of WSDL.
o It is agnostic to semantic representation languages.
o It enables semantic annotations for Web services not only for discovering Web services but also for invoking them.

Based on these design principles, SAWSDL defines the following three new extensibility attributes to WSDL 2.0 elements to enable semantic annotation of WSDL components:

o an extension attribute, named modelReference, to specify the association between a WSDL component and a concept in some semantic model. This modelReference attribute can be used especially to annotate XML Schema type definitions, element declarations, and attribute declarations as well as WSDL interfaces, operations, and faults.
o two extension attributes, named liftingSchemaMapping and loweringSchemaMapping, that are added to XML Schema element declarations and type definitions for specifying mappings between semantic data and XML.

These mappings can be used during service invocation. SAWSDL is agnostic to the domain model, which gives it a lot of flexibility: domain models can be as simple as agreed-upon English-language terms or as complex as expressive ontologies that use formal models such as description logics.

References

1. WSDL 2.0, http://www.w3.org/TR/wsdl20/
2. SAWSDL, http://www.w3.org/TR/sawsdl
3. WSDL-S, http://www.w3.org/Submission/WSDL-S/

Other Links

http://www.w3.org/2002/ws/sawsdl/ http://www.w3.org/TR/sawsdl-guide/

4.2.63 Service Availability Forum Specifications. SA Forum
This is a set of specifications[1] managed by the Service Availability Forum (SA Forum). Today, the dependability of the global communication infrastructure is more important than ever. As new technologies emerge to power new services, users quickly become dependent on those services to conduct their personal and professional lives. With this development comes the challenge of accommodating growth and emerging technologies while maintaining uninterrupted availability and dependability. The Service Availability Forum was formed to develop the missing standard interfaces necessary to enable the delivery of highly available carrier-grade systems with off-the-shelf hardware platforms, middleware and service applications. By standardizing the interfaces for systems that need to implement high levels of service availability, the SA Forum aims to help drive towards a new open world for service availability. The SA Forum specifications enable the implementation of such carrier-grade systems and services built with commercial off-the-shelf building blocks. This modular architecture, built on open standard hardware and software, allows for greater reuse and a much quicker turnaround for new product introductions. The architecture is depicted in the following figure:

The specifications are detailed in the following sections.


4.2.63.1 Hardware Platform Interface Specification
The SAF Hardware Platform Interface (HPI)[2][3] specifies a generic mechanism to monitor and control highly available systems. The ability to monitor and control these systems is provided through a consistent, platform-independent set of programmatic interfaces. The HPI specification provides data structures and functional definitions that can be used to interact with manageable subsets of a platform or system. The HPI allows applications and middleware (the "HPI User") to access and manage hardware components via a standardized interface. Its primary goal is to allow for portability of HPI User code across a variety of hardware platforms by separating the hardware from the management middleware and making each independent of the other. The HPI model includes four basic concepts:

o Entities represent the physical components of the system. Each entity has a unique identifier, called an entity path, which is defined by the component's location in the physical containment hierarchy of the system. An entity's manageability is modelled in HPI by management instruments and management capabilities contained in one or more resources.
o Resources provide management access to the entities within the system. Frequently, resources represent functions performed by a local control processor used for management of the entity's hardware. Each resource is responsible for presenting a set of management instruments and management capabilities to the HPI User. Resources may be dynamically added and removed in a system as hot-swappable system components that include management capabilities are added and removed.
o Domains provide access to sets of resources. Each domain also provides information about the resources that are accessible through that domain. Many systems may have only a single domain, whereas systems that have areas dedicated to separate tasks, for example, may manage these through separate domains.
o Sessions provide all access to an HPI implementation by HPI Users. An HPI session is opened on a single domain; one HPI User may have multiple sessions open at once, and there may be multiple sessions open on any given domain at once. It is intended that, in future releases, access control to the HPI will be performed at the session level; thus different sessions may have different access control. Sessions also provide access to events created or forwarded by the domain accessed by the session.

AdvancedTCA is a series of industry standard specifications for the next generation of carrier grade communications equipment. As the largest specification effort in PICMG's history and with more than 100 companies participating, AdvancedTCA incorporates the latest trends in high speed interconnect technologies, next generation processors, and improved reliability, manageability and serviceability, resulting in a new blade (board) and chassis (shelf) form factor optimized for communications. AdvancedTCA provides standardized platform architecture for carrier-grade telecommunication applications, with support for carrier-grade features such as NEBS, ETSI, and 99.999% availability.


The purpose of the HPI-to-AdvancedTCA Mapping Specification[4] is to expose AdvancedTCA Shelf Management functionality and data in a standard, vendor-independent manner via the Service Availability Forum's (SAF) Hardware Platform Interface (HPI).

4.2.63.2 Application Interface Specification (AIS)
The Application Interface Specification (AIS)[5][6] standardizes the interface between Service Availability Forum compliant High Availability (HA) middleware and service applications. AIS will lower development costs and accelerate time-to-market by enabling and ensuring portability, Service Availability middleware solution options, and adopter product and services options. ISVs, NEPs, and others adopting the Application Interface Specification will speed and simplify development and enable solutions composed of portable, open, carrier-grade "Building Blocks". The adoption allows:

o Reduced time-to-market and development costs
o Enhanced portability and integration capabilities
o Improved scalability for fault monitoring and management
o Increased resources focused on innovation of solutions
o Limited technology risk through choice of compatible COTS components

This specification includes the following sub-specifications described below.

AIS Availability Management Framework
The Availability Management Framework[7] (sometimes also called the AM Framework or simply the Framework) is the software entity that ensures service availability by coordinating other software entities within a cluster. The Availability Management Framework provides a view of one logical cluster that consists of a number of cluster nodes. These nodes host various resources in a distributed computing environment. The Availability Management Framework provides a set of APIs to enable highly available applications. In addition to component registration and life cycle management, it includes functions for error reporting and health monitoring. The Availability Management Framework also assigns active or standby workloads to the components of an application as a function of component state and system configuration. The Availability Management Framework configuration allows prioritization of resources and provides for a variety of redundancy models. The Availability Management Framework also provides APIs for components to track the assignment of work or so-called component service instances among the set of components protecting the same component service instance.

AIS Checkpointing Service
The Checkpoint Service[8] provides a facility for processes to record checkpoint data incrementally, which can be used to protect an application against failures.


When recovering from fail-over or switch-over situations, the checkpoint data can be retrieved, and execution can be resumed from the state recorded before the failure. Checkpoints are cluster-wide entities that are designated by unique names. A copy of the data stored in a checkpoint is called a checkpoint replica; for performance reasons, a checkpoint replica is typically stored in main memory rather than on disk. A checkpoint may have several checkpoint replicas stored on different nodes in the cluster to protect it against node failures. To avoid accumulation of unused checkpoints in the system, checkpoint replicas have a retention time. When a checkpoint has not been opened by any process for the duration of the retention time, the Checkpoint Service automatically deletes the checkpoint.
AIS Information Model Management Service
The objects in the Information Model[9] are provided with their attributes and administrative operations (that is, operations that can be performed on the represented entities through system management interfaces). For management applications or Object Managers, the IMM provides the APIs to create, access, and manage these objects. The IMM Service delivers the requested operations to the appropriate AIS Services or applications (referred to as Object Implementers) that implement these objects for execution. Information Model objects and attributes can be classified into two categories: (1) configuration objects and attributes and (2) runtime objects and attributes.
AIS Security Service
The SA Forum Security Service[10] is concerned with providing the mechanisms to mediate the access to and use of the various AIS Services. The SA Forum Security Service is not concerned with providing mechanisms such as encryption for the data that passes through those services, such as in checkpoints or message queues. However, the users of those services may certainly apply such mechanisms, if desired. In more detail, the Security Service allows the AIS Services to authorize particular activities for certain processes within the cluster. This authorization is necessary to protect the HA infrastructure from misuse. In addition, it avoids unauthorized access to an SA Forum application and its managed data and preserves its integrity. It should also be noted that the Security Service shall be agnostic to underlying crypto algorithms and libraries, if any. These shall be used in a transparent way (as much as possible) by the SA Forum Security Service (or other AIS Services when necessary) to provide adequate security to AIS Service client processes.
AIS Software Management Framework
SA Forum systems[11] are required to provide highly available services to their users over a long period of time during which the systems may undergo changes due to growth and evolution, bug fixes, or enhancement of services. These changes may require addition, removal, replacement, or reconfiguration of hardware or software elements. High service availability requires that such changes cause no (or only minimal) loss of service. The different kinds of software that execute on an SA Forum system can be classified as firmware, system software (including hypervisors, operating systems, and middleware), and application software. Such software is constituted of binary or interpreted code that can be executed on the system along with some provisioning data that is required for the software to execute properly. Some SA Forum Services such as the Availability Management Framework are responsible for controlling the execution of the software on the system (e.g. application software, in the case of the Availability Management Framework).


Each SA Forum Service defines its own set of logical entities that are used to either (1) represent instances of software execution under its control or (2) describe the management policies and relationships among these various execution instances.
AIS Cluster Membership Service
The Cluster Membership Service[12] provides applications with membership information about the nodes that have been administratively configured in the cluster configuration (these nodes are also called cluster nodes or configured nodes), and it is core to any clustered system. A cluster consists of this set of configured nodes, each with a unique node name. The Cluster Membership Service also allows application processes to register a callback function to receive membership change notifications as those changes occur.
AIS Event Service
The Event Service[13] is a publish/subscribe multipoint-to-multipoint communication mechanism that is based on the concept of event channels. One or more publishers communicate asynchronously with one or more subscribers by means of events over a cluster-wide entity named an event channel.
AIS Lock Service
The Lock Service[14] is a distributed lock service, which is intended for use in a cluster where processes in different nodes (as defined in the Cluster Membership Service specification) might compete with each other for access to a shared resource. The Lock Service provides a simple lock model supporting two locking modes, exclusive access and shared access, each supporting different features (synchronous and asynchronous calls, lock timeout, trylock, and lock wait notifications). The locks provided by the Lock Service are not recursive (each lock must be claimed individually).
AIS Log Service
Logging information[15] is high-level, cluster-significant, function-based (as opposed to implementation-particular) information suited primarily for network or system administrators, or automated tools, to review current and historical logged information in order to troubleshoot issues such as misconfigurations, network disconnects and unavailable resources. An SA Forum compliant ecosystem assumes the AIS Log Service, or some functionally equivalent service, is available for use by applications as well as other AIS services (e.g. the SA Forum Notification Service). Within the SA Forum Log Service boundary, there are two main objects internal to the Log Service: (1) log streams (a conceptual flow of log records; there are four distinct log stream types: alarm, notification, system, and application) and (2) log records (an ordered set of information logged by some process). The Log Service also offers the APIs to access the log functionality.
AIS Message Service
The Message Service[16] specifies a buffered message-passing system based on the concept of a message queue for processes on the same or on different nodes. Messages are written to message queues and read from them. A single message queue permits multipoint-to-point communication. Message queues are persistent or nonpersistent. The Message Service must preserve messages that have not yet been consumed when the message queue is closed.


They are identified by logical names, so that a process is unaware of the number of message queues and of the physical location of the message queues to which it is communicating. The sender addresses message queue groups by using the same mechanisms that it uses to address single message queues. The message queue groups can be used to distribute messages among message queues pertaining to the message queue group. Message queue groups can be used to maintain transparency of the sender process to faults in the receiver processes, represented by the message queues in the message queue groups. With message queues, the Message Service uses the model of n senders to 1 receiver whereas with message queue groups, the Message Service uses the model of m senders to n receivers.

AIS Naming Service
The Naming Service[17] provides a mechanism by which human-friendly names are associated with ('bound to') objects so that these objects can be looked up given their names. The objects typically represent service access points, communication endpoints and other resources that provide some sort of service. The Naming Service imposes neither a specific layout nor a convention on either the names or the objects to which they are bound. It allows the users of the service to select and use their own naming schema without assuming any specific hardware or logical software configuration. The clients of the Naming Service are expected to understand the structure, layout and semantics of the object bindings they intend to store inside and retrieve from the service. The Naming Service caters to two categories of clients: (1) service providers, software entities that wish to advertise objects bound to a name, and (2) service users, software entities that use these names to look up the corresponding bound objects.

AIS Notification Service
The Notification Service[18] is used by a service user to report an event to a peer service user. It is defined as a non-confirmed service. Event here means the same as in commonly understood English: an incident, or simply a change of status (note: to avoid confusion with the Event Service, the term notification is used here). The service is based to a great degree on the ITU-T recommendations X.700 - X.799, which deal with the area of system management and how it may be applied to a communications system, but it also relies on many other supporting recommendations, including, for example, the concepts of managed objects covered in the Structure of Management Information.

AIS Timer Service
The Timer Service[19] provides a mechanism by which client processes get notified when a timer expires. A timer is a logical object that is dynamically created and represents either an absolute time or a duration. The Timer Service provides two types of timers: (1) single-event timers expire once and are deleted after notification, and (2) periodic timers expire each time a specified duration is reached, with the process notified about each expiration; periodic timers have to be explicitly deleted.


References

1. SA-Forum Specifications (Rel.5) Home, http://www.saforum.org/specification/
2. HPI Home, http://www.saforum.org/specification/HPI_Specification/
3. HPI Specification, http://www.saforum.org/specification/getspec_content/SAF-HPI_B.02.01_2006-12-13e.pdf
4. HPI to ATCA Mapping Specification, http://www.saforum.org/specification/getspec_content/SAIM-HPI-B.01.01-ATCA.zip
5. AIS Home, http://www.saforum.org/specification/AIS_Information/
6. AIS Specification, http://www.saforum.org/specification/download/
7. Availability Management Framework Specification, http://www.saforum.org/specification/getspec_content/aisAmf.B0301.pdf
8. Checkpoint Service Specification, http://www.saforum.org/specification/getspec_content/aisCkpt.B0202.pdf
9. Information Model Management Service Specification, http://www.saforum.org/specification/getspec_content/aisImm.A0201.pdf
10. Security Service Specification, http://www.saforum.org/specification/getspec_content/aisSec.A0101.pdf
11. Software Management Framework Specification, http://www.saforum.org/specification/getspec_content/aisSmfA0101.pdf
12. Cluster Membership Specification, http://www.saforum.org/specification/getspec_content/aisClm.B0301.pdf
13. Event Service Specification, http://www.saforum.org/specification/getspec_content/aisEvt.B0301.pdf
14. Lock Service Specification, http://www.saforum.org/specification/getspec_content/aisLck.B0301.pdf
15. Logging Information Specification, http://www.saforum.org/specification/getspec_content/aisLog.A0201.pdf
16. Message Service Specification, http://www.saforum.org/specification/getspec_content/aisMsg.B0301.pdf
17. Naming Service Specification, http://www.saforum.org/specification/getspec_content/aisNam.A0101.pdf
18. Notification Service Specification, http://www.saforum.org/specification/getspec_content/aisNtf.A0201.pdf
19. Timer Service Specification, http://www.saforum.org/specification/getspec_content/aisTmr.A0101.pdf

4.2.64 Simple Network Management Protocol (SNMP) v3. IETF
SNMP[1] forms part of the Internet protocol suite as defined by the Internet Engineering Task Force (IETF). SNMP is used in network management systems (NMS) to monitor network-attached devices for conditions that warrant administrative attention. It consists of a set of standards for network management, including an application-layer protocol, a database schema, and a set of data objects. SNMP is the de facto standard communications protocol supporting integrated network management in heterogeneous environments. An SNMP-managed network consists of four key components:

Managed devices - A managed device is a network node that contains an SNMP agent and that resides on a managed network. Managed devices collect and store management information and make this information available to NMSs using SNMP. Managed devices, sometimes called network elements, can be any type of device including, but not limited to, routers, access servers, switches, bridges, hubs, IP telephones, computer hosts, and printers.

Agents - An agent is a network-management software module that resides in a managed device. An agent has local knowledge of management information and translates that information into a form compatible with SNMP.

Network-management systems (NMSs) - An NMS executes applications that monitor and control managed devices. NMSs provide the bulk of the processing and memory resources required for network management. One or more NMSs may exist on any managed network.

Management information base (MIB) - A MIB is a collection of managed objects residing in a virtual information store. Collections of related managed objects are defined in specific MIB modules. A MIB can be depicted as an abstract tree with an unnamed root. Individual data items make up the leaves of the tree. Object identifiers (IDs) uniquely identify or name MIB objects in the tree. Object IDs are like telephone numbers: they are organized hierarchically, with specific digits assigned by different organizations.


Interactions between the NMS and managed devices can be any of four different types of commands:

Reads - To monitor managed devices, NMSs read variables maintained by the devices.

Writes - To control managed devices, NMSs write variables stored within the managed devices.

Traversal operations - NMSs use these operations to determine which variables a managed device supports and to sequentially gather information from variable tables (such as IP routing tables) in managed devices.

Traps - Managed devices use traps to asynchronously report certain events to NMSs.
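The sketch below is a toy illustration of these interaction styles, not an SNMP implementation: it models a tiny MIB as a table keyed by OID strings and shows the read, write and traversal operations against it. The OIDs are standard MIB-II identifiers; the stored values are invented, and a real manager would use one of the implementations listed below.

```python
# Toy model of an SNMP agent's MIB: OID -> current value.
mib = {
    "1.3.6.1.2.1.1.1.0": "Example router, firmware 1.2",  # sysDescr.0
    "1.3.6.1.2.1.1.5.0": "core-router-01",                 # sysName.0
    "1.3.6.1.2.1.2.1.0": 4,                                # ifNumber.0
}

def _key(oid):
    # OIDs are ordered numerically per sub-identifier, not as plain strings.
    return tuple(int(part) for part in oid.split("."))

def snmp_get(oid):
    """Read: return the value of a single managed variable."""
    return mib.get(oid)

def snmp_set(oid, value):
    """Write: change a variable in order to control the managed device."""
    mib[oid] = value

def snmp_getnext(oid):
    """Traversal: return the next variable in MIB order, as GETNEXT does."""
    later = sorted((o for o in mib if _key(o) > _key(oid)), key=_key)
    return (later[0], mib[later[0]]) if later else None

print(snmp_get("1.3.6.1.2.1.1.5.0"))      # 'core-router-01'
print(snmp_getnext("1.3.6.1.2.1.1.1.0"))  # the next variable in the tree
```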

There are several open-source implementations:

Net-SNMP [2]
Net-SNMPJ [3]
Open-SNMP [4]
SNMP4J [5]

References

1. Specification, http://tools.ietf.org/html/rfc3414
2. Net-SNMP, http://www.net-snmp.org/
3. Net-SNMPJ, http://netsnmpj.sourceforge.net/
4. Open-SNMP, http://sourceforge.net/projects/opensnmp/
5. SNMP4J, http://www.snmp4j.org/

4.2.65 SOAP 1.1/1.2. W3C
SOAP is a lightweight, XML-based protocol intended for exchanging structured information in a decentralized, distributed environment. It consists of three parts:

an envelope that defines a framework for describing what is in a message and how to process it,
a set of encoding rules for expressing instances of application-defined datatypes, and
a convention for representing remote procedure calls and responses.

A SOAP message contains the following elements:

a required envelope, which identifies it as a SOAP message
an optional header
a required body, which contains the actual message
an optional fault

SOAP can be used over any transport protocol, but in order to traverse firewalls it is often used with HTTP or HTTPS in the context of Web services. SOAP 1.1 is a W3C Note and SOAP 1.2 is a W3C Recommendation.
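As a concrete illustration of the envelope/header/body structure, the following sketch builds a minimal SOAP 1.2 message as a Python string. The GetPrice operation and its namespace are invented examples; only the envelope structure and the SOAP 1.2 envelope namespace come from the specification.

```python
# Minimal SOAP 1.2 envelope, shown as a Python string for illustration only.
soap_request = """<?xml version="1.0" encoding="UTF-8"?>
<env:Envelope xmlns:env="http://www.w3.org/2003/05/soap-envelope">
  <env:Header>
    <!-- optional header blocks, e.g. addressing or security tokens -->
  </env:Header>
  <env:Body>
    <m:GetPrice xmlns:m="http://example.org/stock">
      <m:Symbol>ACME</m:Symbol>
    </m:GetPrice>
  </env:Body>
</env:Envelope>"""
print(soap_request)
```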

References

1. SOAP 1.2, Part 1, http://www.w3.org/TR/soap12-part1/
2. SOAP 1.2, Part 2, http://www.w3.org/TR/soap12-part2/
3. SOAP 1.1, http://www.w3.org/TR/soap/

4.2.66 SOAP Message Security. OASIS
This OASIS specification[1] describes enhancements to SOAP messaging to provide message integrity and confidentiality. The specified mechanisms can be used to accommodate a wide variety of security models and encryption technologies. The specification also provides a general-purpose mechanism for associating security tokens with message content. No specific type of security token is required; the specification is designed to be extensible (i.e. to support multiple security token formats).
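The sketch below illustrates how such a security token travels in the SOAP header, using a UsernameToken. The namespace URI shown is the commonly used WSS 1.0 one; the credentials are placeholders, and a real deployment would add XML Signature and XML Encryption for integrity and confidentiality.

```python
# Illustrative WS-Security header carrying a UsernameToken; placeholders only.
wss_header = """<wsse:Security
    xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
  <wsse:UsernameToken>
    <wsse:Username>alice</wsse:Username>
    <wsse:Password>not-a-real-password</wsse:Password>
  </wsse:UsernameToken>
</wsse:Security>"""
# In a real exchange this element appears inside the SOAP env:Header; integrity
# and confidentiality are then added with XML Signature and XML Encryption.
print(wss_header)
```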

References

1. Home, http://www.oasis-open.org/committees/wss/

4.2.67 SPECjAppServer2004. SPEC
SPECjAppServer2004[1] is the current version of the ECperf benchmark for J(2)EE application servers. It is developed by the Standard Performance Evaluation Corporation (SPEC)[2]. SPECjAppServer2004 is a multi-tier benchmark for measuring the performance of Java 2 Enterprise Edition (J2EE) technology-based application servers. The SPECjAppServer2004 workload emulates an automobile manufacturing company and its associated dealerships. Dealers interact with the system using web browsers (simulated by a benchmark application) while the actual manufacturing process is accomplished via RMI (also driven by a benchmark application). This workload stresses the ability of web and EJB containers to handle the complexities of memory management, connection pooling, passivation/activation, caching, etc. In this way, all major J2EE technologies implemented by compliant application servers are exercised and stressed:

the web container, including servlets and JSPs
the EJB container
EJB 2.0 Container Managed Persistence
JMS and Message Driven Beans
transaction management
database connectivity

Moreover, SPECjAppServer2004 also heavily exercises all parts of the underlying infrastructure that make up the application environment, including hardware, JVM software, database software, JDBC drivers, and the system network.

References

1. SPECjAppServer2004 Home, http://www.spec.org/jAppServer2004/
2. SPEC Home, http://www.spec.org/

4.2.68 Storage Management Initiative (SMI-S). SNIA
The SMI-S[1][2] specification from the Storage Networking Industry Association (SNIA) defines a method for the interoperable management of a heterogeneous Storage Area Network (SAN). It describes the information available to a WBEM client from an SMI-S compliant CIM server, together with an object-oriented, XML-based, messaging-based interface designed to support the specific requirements of managing devices in and through SANs. SMI-S defines DMTF[3] Common Information Model (CIM) management profiles for storage systems. The complete SMI specification is categorised into profiles and sub-profiles. A profile describes the behavioural aspects of an autonomous, self-contained management domain; SMI-S includes profiles for arrays, switches, storage virtualizers, volume management and many other domains. In DMTF parlance, a provider is an implementation of a specific profile. A sub-profile describes a part of the domain that can be common to many profiles. At a very basic level, SMI-S entities are divided into two categories:

Clients are management software applications that can reside virtually anywhere within a network, provided they have a physical link (either within the data path or outside the data path) to providers. Clients can be host-based management applications (e.g., storage resource management, or SRM), enterprise management applications, or SAN appliance-based management applications (e.g., virtualization engines).

Servers are the devices under management within the storage fabric. Servers can be disk arrays, host bus adapters, switches, tape drives, etc.

There are several open-source implementations such as OpenPegasus[4] and Aperi[5].

References

1. Home, http://www.snia.org/tech_activities/standards/curr_standards/smi
2. Specification v1.2, http://www.snia.org/tech_activities/standards/curr_standards/smi/SMI-S_Technical_Position_v1.2.0r6.zip
3. DMTF, http://www.dmtf.org
4. OpenPegasus Home, http://openpegasus.com/
5. Aperi, http://www.eclipse.org/aperi/

4.2.69 SVG. W3C
SVG 1.1 [1] is a modularized language for describing two-dimensional vector and mixed vector/raster graphics in XML. SVG is a platform for two-dimensional graphics. It has two parts: an XML-based file format and a programming API for graphical applications. Key features include shapes, text and embedded raster graphics, with many different painting styles. It supports scripting through languages such as ECMAScript and has comprehensive support for animation. SVG is used in many business areas including Web graphics, animation, user interfaces, graphics interchange, print and hardcopy output, mobile applications and high-quality design.


SVG is a royalty-free, vendor-neutral open standard developed under the W3C Process. It has strong industry support; authors of the SVG specification include Adobe, Agfa, Apple, Canon, Corel, Ericsson, HP, IBM, Kodak, Macromedia, Microsoft, Nokia, Sharp and Sun Microsystems. SVG viewers are deployed on over 100 million desktops, and there is a broad range of support in many authoring tools. SVG builds upon many other successful standards such as XML (SVG graphics are text-based and thus easy to create), JPEG and PNG for image formats, DOM for scripting and interactivity, SMIL for animation and CSS for styling. SVG is interoperable: the W3C releases a test suite and implementation results to ensure conformance. The specification illustrates the language with simple examples such as rect01, a rectangle with sharp corners whose 'rect' element is filled with yellow and stroked with navy.
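A minimal sketch along the lines of that rect01 example is shown below as a Python string that writes an SVG file; the coordinates and sizes are illustrative rather than copied from the specification.

```python
# Sketch in the spirit of the spec's rect01 example: a yellow rectangle with a
# navy stroke. Coordinates and dimensions are illustrative placeholders.
svg_doc = """<?xml version="1.0" standalone="no"?>
<svg width="12cm" height="4cm" viewBox="0 0 1200 400"
     xmlns="http://www.w3.org/2000/svg" version="1.1">
  <desc>Example rect01 - rectangle with sharp corners</desc>
  <rect x="400" y="100" width="400" height="200"
        fill="yellow" stroke="navy" stroke-width="10"/>
</svg>"""
with open("rect01.svg", "w") as f:  # any SVG viewer or browser can render this
    f.write(svg_doc)
```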

References

1. Scalable Vector Graphics 1.1, http://www.w3.org/TR/SVG11/

4.2.70 Systems Management Architecture for Server Hardware (SMASH). DMTF
Until now there have been no cross-platform standards that let network administrators directly manage servers from multiple vendors. This led hardware manufacturers to develop varied tool sets to manage in-band and out-of-band traffic for different operating systems and system states. Today's multi-vendor data centers contain an inefficient array of management commands and tools.


To address this, the Distributed Management Task Force (DMTF)[1] announced details of its Systems Management Architecture for Server Hardware suite, including the SMASH Command Line Protocol (CLP) specification. SMASH CLP enables simple and intuitive management of heterogeneous servers in data centers independent of machine state, operating system state, server system topology or access method. Building on the DMTF's Common Information Model schema, SMASH CLP provides a "lightweight" command-line syntax; it lets different vendors' systems be represented in similar ways. Server vendors' products, including stand-alone servers, blades, racks and partitions, will be able to support SMASH CLP commands. With these SMASH CLP-enabled products, users on a management station or a client will be able to execute common operations - such as system power on and off, system log display, boot order configuration and text-based remote console - using the same commands across disparate vendor platforms. SMASH CLP is a command/response specification (executed by a user or in an automated fashion by a script) transmitted and received over a text message-based transport protocol. The SMASH CLP syntax is explicitly defined, with selectable output formats, including free-form text, comma-separated values, keyword=value pairs and XML. In this simple interface, users navigate a directory-like hierarchy of command targets. The text command message is transmitted from the user over the transport protocol to the server. SMASH CLP commands are transmitted and received in the same way regardless of server platform - a breakthrough in simplified server management. In addition to the protocol itself, the SMASH CLP specification includes server profiles spanning the spectrum of stand-alone servers, blades, racks and partitions, addressing enterprise and telco environments. The user-friendly views provided in the profiles are defined to simplify managing system boot, power, storage, driver firmware and software, system configuration and hardware product assets. The SMASH CLP interface provides a uniform command set for controlling hardware in heterogeneous environments, helping reduce management complexity. SMASH CLP also enables the development of common scripts to increase data center automation, which can help to significantly reduce management costs.

References

1. Specification, http://www.dmtf.org/standards/mgmt/smash/

4.2.71 System Management BIOS (SMBIOS). DMTF
The SMBIOS specification[1][2][3] of the Distributed Management Task Force addresses how motherboard and system vendors present management information about their products in a standard format by extending the BIOS interface on x86 architecture systems. The information is intended to allow generic instrumentation to deliver it to management applications that use DMI, CIM (the WBEM data model) or direct access, eliminating the need for error-prone operations such as probing system hardware for presence detection.


The specification is intended to provide enough information that BIOS developers may implement the necessary extensions to allow the hardware on their products and other system-related information to be accurately determined by users of the defined interfaces. In addition, in cases where the implementer has provided write access to non-volatile storage on the system, some information may be updated by management applications after a system is deployed in the field to record data that persists between system starts. The specification is also intended to provide enough information for developers of management instrumentation to develop generic routines for translating from SMBIOS format to the format used by their chosen management technology whether it is a DMTF technology like DMI or CIM, or another technology. To support this translation for DMTF technologies, sections of this specification describe the DMI groups and CIM classes intended to convey the information retrieved from an SMBIOS-compatible system through the interfaces described in the document. SMBIOS 2.5 (Current) includes revisions to the standard to address the evolving hardware architecture, including updates to processor information and device descriptions to reflect current technology.

References

1. Home, http://www.dmtf.org/standards/smbios/
2. Current specification v2.5, http://www.dmtf.org/standards/published_documents/DSP0134v2.5Final.pdf
3. Specification v2.6 (preliminary), http://www.dmtf.org/standards/published_documents/DSP0134.pdf

4.2.72 TPC-App. TPC
TPC-App[1] is a benchmark from the Transaction Processing Performance Council. It comprises a set of basic operations designed to exercise transactional application server functionality in a manner representative of business-to-business Web service environments. These basic operations have been given a real-life context, portraying the business activity of a distributor that supports online ordering and browsing activity, which is intended to help users relate intuitively to the components of the benchmark. The workload is centered on the business logic involved in processing orders and retrieving product catalog items, and it provides a logical database design. The workload was designed specifically to stress the application server. It exercises commercially available application server products, messaging products, and the databases associated with such environments. The workload is performed in a managed environment that simulates the activities of a business-to-business transactional application server operating in a 24x7 environment; as such, the work to be performed by the database was purposely minimized. Additionally, the application was designed so that it clusters in a manner that is as nearly linear as possible. All application server SYSTEMS are required to have identical hardware and software configurations, and the workload is distributed across all application server SYSTEMS. TPC-App does not permit specialized application server SYSTEMS that do not perform all of the application server SYSTEM requirements. TPC-App does not benchmark the logic needed to process or display the presentation layer (for example, HTML) to the clients. The clients in TPC-App represent businesses that utilize Web services in order to satisfy their business needs. TPC-App does not represent the activity of any particular business segment, but rather any industry that must market and sell a product or service over the Internet via Web services (e.g., retail store, software distribution, airline reservation, etc.). TPC-App does not attempt to be a model of how to build an actual application. The purpose of this benchmark is to retain the application's essential performance characteristics, namely the level of system utilization and the complexity of operations, while reducing the diversity of operations found in application servers. A large number of functions have to be performed to manage an environment that supports order processing and browsing functions; TPC-App includes a representative set of these functions. Many other functions are not of primary interest for performance analysis, since they are proportionally small in terms of system resource utilization or in terms of frequency of execution. Although these functions are vital for a production system, they merely create unnecessary diversity in the context of a standard benchmark and have been omitted from TPC-App. The application portrayed by the benchmark is a retail distributor on the Internet with ordering and product browsing scenarios. The application accepts incoming Web service requests from other businesses (or a store front) to place orders, view catalog items and make changes to the catalog, update or add customer information, or request the status of an existing order. The majority of requests generate order purchase activity, with a smaller portion requesting item catalog information. There are four categories of results: two for clustered systems (Clustered and Clustered-Virtualized) and two for non-clustered systems (Non-Clustered and Non-Clustered-Virtualized). Although this benchmark offers a rich environment that emulates many Web service applications, it does not reflect the entire range of Web service or application server requirements. In addition, the extent to which a customer can achieve the results reported by a vendor is highly dependent on how closely TPC-App approximates the customer application.

References

1. TPC-App Specification v.1.3, http://tpc.org/tpc_app/spec/TPC-App_V1.3.pdf

4.2.73 TPC-C. TPC
TPC-C[1] is another benchmark from the Transaction Processing Performance Council. It comprises a set of basic operations designed to exercise system functionality in a manner representative of complex OLTP application environments. These basic operations have been given a life-like context, portraying the activity of a wholesale supplier, to help users relate intuitively to the components of the benchmark. The workload is centered on the activity of processing orders and provides a logical database design, which can be distributed without structural changes to transactions. TPC-C does not represent the activity of any particular business segment, but rather any industry which must manage, sell, or distribute a product or service (e.g., car rental, food distribution, parts supplier, etc.). TPC-C does not attempt to be a model of how to build an actual application. The performance metric reported by TPC-C is a "business throughput" measuring the number of orders processed per minute. Multiple transactions are used to simulate the business activity of processing an order, and each transaction is subject to a response time constraint. The performance metric for this benchmark is expressed in transactions-per-minute-C (tpmC). To be compliant with the TPC-C standard, all references to TPC-C results must include the tpmC rate, the associated price-per-tpmC, and the availability date of the priced configuration. The purpose of a benchmark is to reduce the diversity of operations found in a production application, while retaining the application's essential performance characteristics, namely the level of system utilization and the complexity of operations. A large number of functions have to be performed to manage a production order entry system. Many of these functions are not of primary interest for performance analysis, since they are proportionally small in terms of system resource utilization or in terms of frequency of execution. Although these functions are vital for a production system, they merely create excessive diversity in the context of a standard benchmark and have been omitted from TPC-C. Although this benchmark offers a rich environment that emulates many OLTP applications, it does not reflect the entire range of OLTP requirements. In addition, the extent to which a customer can achieve the results reported by a vendor is highly dependent on how closely TPC-C approximates the customer application. The relative performance of systems derived from this benchmark does not necessarily hold for other workloads or environments.

References

1. TPC-C Specification v.5.9, http://tpc.org/tpcc/spec/tpcc_current.pdf

4.2.74 TPC-E. TPC
The TPC-E benchmark[1] simulates the OLTP workload of a brokerage firm. The focus of the benchmark is the central database that executes transactions related to the firm's customer accounts. Although the underlying business model of TPC-E is a brokerage firm, the database schema, data population, transactions, and implementation rules have been designed to be broadly representative of modern OLTP systems. The purpose of a benchmark is to reduce the diversity of operations found in a production application, while retaining the application's essential performance characteristics, so that the workload can be representative of an OLTP production system. The System Under Test is focused on portraying the components found on the server side of a transaction monitor or application server. The benchmark has been reduced to a simplified form of the application environment. To measure the performance of the OLTP system, a simple Driver generates Transactions and their inputs, submits them to the System Under Test, and measures the rate of completed Transactions being returned. A large number of functions have to be performed to manage a production brokerage system. Many of these functions are not of primary interest for performance analysis, since they are proportionally small in terms of system resource utilization or in terms of frequency of execution. Although these functions are vital for a production system, they merely create excessive diversity in the context of a standard benchmark and have been omitted from TPC-E (e.g. all application functions related to user interface and display have been excluded from the benchmark). The benchmark is "scalable", meaning that the number of customers defined for the brokerage firm can be varied to represent the workloads of different-size businesses, and it defines the required mix of transactions the benchmark must maintain. The TPC-E metric is given in transactions per second (tps); it specifically refers to the number of Trade-Result transactions the server can sustain over a period of time.

References

1. TPC-E Specification v.1.5.1, http://tpc.org/tpce/spec/TPCE-v1.5.1.pdf

4.2.75 UDDI. OASIS
UDDI (Universal Description, Discovery and Integration) is an initiative for creating a global registry of services and companies. The UDDI specification is the output of an industry-led consortium[1] started in 2000, originally led by IBM, Microsoft and Ariba, and now driven by OASIS. UDDI version 2[2] (in April 2003) and UDDI version 3[3] (in February 2005) were both approved as formal OASIS standards. The UDDI OASIS standard defines a universal method for enterprises to dynamically discover and invoke Web services. The aim of UDDI is to create a global, platform-independent, open framework to enable businesses to:

discover each other;
define how they will interact over the Internet; and
share information in a global registry that will rapidly accelerate the global adoption of B2B eCommerce.

UDDI encodes three types of information:

White Pages: Business name and address, contact information, Web site name, and Data Universal Numbering System (DUNS) or other identifying number.


Yellow Pages: Type of business, location, and products, including various categorization taxonomies for geographical location, industry type, business ID, and so on.

Green Pages: Technical information about business services, such as how to interact with them, business process definitions, and so on. A pointer to the business's WSDL file, if any, would be placed here. Information in this category describes a service's features/functionality, including a unique ID for the service. This category is quite new and specific to the Internet.

UDDI Data Structure
Even if the main focus of UDDI is on Web services, the registries have been designed to be able to manage information about different kinds of services. UDDI registries are a sort of yellow pages for services that support publication and automated service discovery: service providers can register information about the services they offer with these registries, and this information can then be discovered and accessed by service requestors. UDDI has two main parts: registration and discovery. Registration means that businesses can post information to UDDI; discovery means that other businesses can search for and find that information. UDDI can be considered as extending the functionality provided by SOAP to allow services to be queried and described. Within the model the business registry is logically centralised, but physically distributed, with data replicated across nodes on a regular basis. Starting from the UDDI specification, a UDDI Business Registry (UBR) has been created. This registry is a free and public UDDI registry jointly operated by IBM, Microsoft, NTT Communications, and SAP. Anyone is free to publish information to any of the UBR nodes and to query any of them. The companies previously mentioned have developed tools to search for services published on this registry (e.g. the Microsoft UDDI Business Registry Node, the UDDI@SAP Business Registry). These search tools offer the possibility:

to browse by category,
to search by service name or partial name,
to search by provider name or partial name,
to search for a service by tModel name or partial name.


Every company implementing these search tools offers a set of APIs allowing any developer to build its own search tool according to specific needs. UDDI was also integrated into the Web Services Interoperability (WS-I) profiles as a central pillar of the Web services infrastructure.
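As an illustration of the inquiry style these APIs expose, the sketch below shows a UDDI version 2 find_business request as a Python string. The message layout follows the UDDI v2 Inquiry API as commonly documented; the business name queried is a placeholder, and authentication and error handling are omitted.

```python
# Illustrative UDDI v2 inquiry: ask the registry for businesses whose name
# starts with "Example". The SOAP 1.1 envelope wraps the UDDI message.
find_business = """<?xml version="1.0" encoding="UTF-8"?>
<Envelope xmlns="http://schemas.xmlsoap.org/soap/envelope/">
  <Body>
    <find_business generic="2.0" xmlns="urn:uddi-org:api_v2">
      <name>Example</name>
    </find_business>
  </Body>
</Envelope>"""
# The registry answers with a businessList of matching businessInfo entries,
# which can then be drilled into for service and tModel details.
print(find_business)
```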

References

1. http://uddi.xml.org/
2. Universal Description, Discovery and Integration v2, http://www.oasis-open.org/specs/index.php#uddiv2
3. Universal Description, Discovery and Integration v3, http://www.oasis-open.org/specs/index.php#uddiv3.0.2

4.2.76 Web-Based Enterprise Management (WBEM). DMTF
WBEM[1] is another specification managed by the Distributed Management Task Force (DMTF). WBEM is a set of management and Internet standard technologies developed to unify the management of distributed computing environments, facilitating the exchange of data across otherwise disparate technologies and platforms. WBEM is extensible, facilitating the development of platform-neutral, reusable infrastructure, tools and applications. In addition to its use by vendors, end users and the open source community, WBEM is enabling other industry organizations to build on its foundation in areas including Web services, security, storage, grid and utility computing. To understand the WBEM architecture, consider the components which lie between the operator trying to manage a device (configure it, turn it off and on, collect alarms, etc.) and the actual hardware and software of the device:

The operator will typically be presented with some form of graphical user interface (GUI), browser user interface (BUI), or command line interface (CLI). The WBEM standard has nothing to say about this interface (although a CLI for specific applications is being defined); in fact it is one of the strengths of WBEM that it is independent of the human interface, since human interfaces can be changed without the rest of the system needing to be aware of the changes.

The GUI, BUI or CLI will interface with a WBEM client through a small set of application programming interfaces (APIs). This client will find the WBEM server for the device being managed (typically on the device itself) and construct an XML message with the request. The client will use the HTTP (or HTTPS) protocol to pass the request, encoded in CIM-XML, to the WBEM server.

The WBEM server will decode the incoming request, perform the necessary authentication and authorization checks and then consult the previously-created model of the device being managed to see how the request should be handled. This model is what makes the architecture so powerful: it represents the pivot point of the transaction, with the client simply interacting with the model and the model interacting with the real hardware or software. The model is written using the Common Information Model standard, and the DMTF has published many models for commonly-managed devices and services: IP routers, storage servers, desktop computers, and so on.

For most operations, the WBEM server determines from the model that it needs to communicate with the actual hardware or software. This is handled by so-called "providers": small pieces of code which interface between the WBEM server (using a standardised interface known as CMPI) and the real hardware or software. Because the interface is well-defined and the number of types of call is small, it is normally easy to write providers. In particular, the provider writer knows nothing of the GUI, BUI, or CLI being used by the operator.
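To make the client-to-server exchange more concrete, the sketch below shows the kind of CIM-XML payload a WBEM client POSTs over HTTP(S) to a WBEM server, here asking for all instances of the CIM_ComputerSystem class in the root/cimv2 namespace. The element names follow the CIM-XML mapping, but the exact payload should be read as an illustrative sketch rather than a verbatim excerpt from the standard.

```python
# Sketch of a CIM-XML EnumerateInstances request; treat details as illustrative.
cim_xml_request = """<?xml version="1.0" encoding="utf-8"?>
<CIM CIMVERSION="2.0" DTDVERSION="2.0">
  <MESSAGE ID="1001" PROTOCOLVERSION="1.0">
    <SIMPLEREQ>
      <IMETHODCALL NAME="EnumerateInstances">
        <LOCALNAMESPACEPATH>
          <NAMESPACE NAME="root"/>
          <NAMESPACE NAME="cimv2"/>
        </LOCALNAMESPACEPATH>
        <IPARAMVALUE NAME="ClassName">
          <CLASSNAME NAME="CIM_ComputerSystem"/>
        </IPARAMVALUE>
      </IMETHODCALL>
    </SIMPLEREQ>
  </MESSAGE>
</CIM>"""
# The WBEM server decodes this, consults its CIM model, calls the relevant
# provider, and returns a response message with the matching instances.
print(cim_xml_request)
```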

The following are open-source implementations of the specification: Pegasus[2], OpenWBEM[3], SBLIM[4], WBEM Services[5] and Purgos[6]. Finally, WBEM Server[7] provides an architecture for integrating management standards.

References

1. Specification, http://www.dmtf.org/standards/wbem
2. Open Pegasus, http://www.openpegasus.org/
3. OpenWBEM, http://www.openwbem.org/
4. SBLIM, http://sblim.wiki.sourceforge.net/
5. WBEM Services, http://wbemservices.sourceforge.net/
6. Purgos, http://www.softulz.net/
7. WBEM Server, http://www.wbemsolutions.com/products_cwbemserver.html

4.2.77 Web Services Choreography Description Language (WS-CDL). W3C
The Web Services Choreography Description Language (WS-CDL) is an XML-based language that describes peer-to-peer collaborations of participants by defining, from a global viewpoint, their common and complementary observable behaviour. It is focused on composing interoperable, peer-to-peer collaborations between any type of component, regardless of the programming model or supporting platform used by the implementation of the hosting environment. WS-CDL could be very useful for applications in a grid context, where one requirement is the ability to perform long-lived, peer-to-peer collaborations between the participating services, within or across the trusted domains of an organization. WS-CDL is useful when organizations plan to carry out a peer-to-peer collaboration based on Web services technologies in order to perform a business process. The chief phase is then to define the choreography representing the collaboration; once an agreement on the choreography is reached, each organization can focus on the implementation of its specific processes.

References

1. WS-CDL specification, http://www.w3.org/TR/ws-cdl-10/

Other links:

1. W3C WS Choreography Working Group, http://www.w3.org/2002/ws/chor/
2. WS-CDL Primer, http://www.w3.org/TR/2006/WD-ws-cdl-10-primer-20060619/

4.2.78 Web Service Choreography Interface (WSCI). W3C
The Web Service Choreography Interface (WSCI) is an XML-based interface description language for defining the flow of messages exchanged by a Web service taking part in choreographed interactions with other services. WSCI describes the dynamic interface of the Web service participating in a given message exchange, reusing the operations defined for its static interface. WSCI usually works together with the Web Services Description Language (WSDL), the basis for the W3C Web Services Description Working Group; it can also work with other service definition languages, provided they are similar to WSDL. WSCI does not address the definition and implementation of the internal processes that actually drive the message exchange; its main goal is to describe the observable behaviour of a Web service by means of a message-flow-oriented interface. WSCI thus describes the flow of messages exchanged by a Web service in a particular process, and it also describes the collective message exchange among interacting Web services, providing a global view of a complex process involving multiple Web services. This last point is very useful in order to fill the gap between business process management and Web services by describing how a Web service can be used as part of a larger and more complex business process.

References

1. Web Service Choreography Interface (WSCI) 1.0, http://www.w3.org/TR/wsci/

4.2.79 Web Service Description Language (WSDL). W3C
WSDL is an XML language for describing Web services. It specifies the location of the service and the operations (or methods) the service exposes. Using WSDL, a service provider can describe the expectations and functionality of a single Web service in a platform-independent way, so that potential requestors can understand how to correctly interact with the service. In the world of loosely-coupled Web services, WSDL plays a central role for interoperability among services implemented on different platforms. It makes it possible to separate the description of the abstract functionality offered by a service from concrete details of a service description such as "how" and "where" that functionality is offered. The current version of the WSDL specification[1] is 2.0 (June 2007), which is a W3C Recommendation, whereas WSDL 1.1 (March 2001)[2] has not been endorsed by the W3C. WSDL 1.2 was renamed WSDL 2.0 because of its substantial differences from WSDL 1.1. WSDL 2.0 is based on the following specifications:

Part 1: Core: defines a language for describing the abstract functionality of a service as well as a framework for describing the concrete details of a service description. It also defines the conformance criteria for documents in this language.

Part 2: Adjuncts: specifies predefined extensions for use in WSDL 2.0, such as message exchange patterns and SOAP modules, and defines a language for the description of some concrete details of SOAP 1.2 and HTTP.

The description of a service with the WSDL 2.0 specification is based on the following conceptual model:

WSDL 2.0 describes a Web service in two fundamental stages: one abstract and one concrete. Within each stage, the description uses a number of constructs to promote reusability of the description and to separate independent design concerns. In the abstract part, WSDL 2.0 describes a web service in terms of messages it sends and receives through a type system, typically W3C XML Schema. Message exchange patterns define the sequence and cardinality of messages. An operation associates a message exchange pattern with one or more messages. A message exchange pattern identifies the sequence and cardinality of messages sent and/or received as well as who they are logically sent to and/or received from. An interface groups together operations without any commitment to transport or wire format. At a concrete level, a binding specifies transport and wire format details for one or more interfaces. An endpoint associates a network address with a binding. And finally, a service groups together endpoints that implement a common interface.
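A skeleton of a WSDL 2.0 description, reduced to the constructs named above, is sketched below as a Python string so that the abstract/concrete split stays visible. The target namespace, element names and addresses are invented for illustration.

```python
# Skeleton of a WSDL 2.0 description; names and addresses are placeholders.
wsdl_skeleton = """<description xmlns="http://www.w3.org/ns/wsdl"
             targetNamespace="http://example.org/quote"
             xmlns:tns="http://example.org/quote">

  <!-- abstract part: types (XML Schema) and the interface -->
  <types>
    <!-- XML Schema definitions of the input/output elements go here -->
  </types>
  <interface name="QuoteInterface">
    <operation name="getQuote" pattern="http://www.w3.org/ns/wsdl/in-out">
      <input element="tns:quoteRequest"/>
      <output element="tns:quoteResponse"/>
    </operation>
  </interface>

  <!-- concrete part: how and where the interface is offered -->
  <binding name="QuoteSoapBinding" interface="tns:QuoteInterface"
           type="http://www.w3.org/ns/wsdl/soap"/>
  <service name="QuoteService" interface="tns:QuoteInterface">
    <endpoint name="QuoteEndpoint" binding="tns:QuoteSoapBinding"
              address="http://example.org/quote/endpoint"/>
  </service>
</description>"""
print(wsdl_skeleton)
```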


From a technology adoption perspective, WSDL 2.0 expands WSDL 1.1 and builds on WS-I Basic Profile improvements in terms of core usage with HTTP/SOAP 1.2 bindings, Message exchange patterns, Fault handling and support for developing REST (Representational State Transfer) based Web applications. WSDL 2.0 also extends support for representing Semantic Annotations for WSDL (SAWSDL) and WS-Policy. The specification of WSDL 2.0 defines an abstract component model that allows WSDL 2.0 definitions to be independent of any particular serialization, including XML. For easy use with XML, WSDL 2.0 also defines an XML Infoset representation for each component, and provides mapping rules from the XML representation to the component. The following diagram summarizes the XML representation of the WSDL 2.0 component model. One can easily identify the similarities with WSDL 1.1. An interface still contains operations which still have input and/or output. Binding still provides mirroring constructs for interface. Service still provides the concrete address.

Some differences from WSDL 1.1 can be noticed:

the message construct has been removed; messages are now specified using the XML Schema type system in the types element
definitions has been renamed to descriptions
portType has been renamed to interface; support for interface inheritance is achieved by using the extends attribute in the interface element
a new construct, include, has been added
ports have been renamed to endpoints

WSDL is a widely adopted standard. The main reason for its success is that it adopts the platform-independent XML Schema specification as its type system for describing data types and message formats. Furthermore, it is one of the most mature standards concerning Web services, and the W3C is continuously working on the specification by refining and advancing it.

References

1. Web Services Description Language (WSDL) Version 2.0 Part 1: Core Language, http://www.w3.org/TR/wsdl20/
2. Web Services Description Language (WSDL) 1.1, http://www.w3.org/TR/wsdl

Other Links

WSDL 1.1 Specification
WSDL 2.0 Specification Part 0: Primer (Latest Version)
WSDL 2.0 Specification Part 1: Core (Latest Version)
WSDL 2.0 Specification Part 2: Adjuncts (Latest Version)
http://webservices.xml.com/lpt/a/ws/2004/05/19/wsdl2.html
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/74bae690-0201-0010-71a5-9da49f4a53e2

4.2.80 Web Services Distributed Management (WSDM). OASIS
WSDM[1] is a Web services specification managed by the WSDM technical committee[2] of the OASIS consortium. A great variety of management systems already co-exist to manage the breadth of resources. WSDM is a standard created in OASIS for managing and monitoring the status of other services, addressing the management integration problem of Web services distributed management. WSDM uses Web services as a platform to provide essential distributed computing functionality, interoperability, loose coupling, and implementation independence. The following is a list of some of the key design goals of WSDM:

Resource orientation: WSDM represents access to manageable resource interfaces directly as Web services.

Agent architecture agnostic: Because Web services easily support proxy and redirection architectures, it is possible to represent and offer resources as Web services rather than through interfaces on a traditional agent. The resource may be accessed through an agent or other proxy, or it may, like an application server, support its manageable resource directly; this implementation choice is not reflected in the interface to the resource or in how the manager interacts with it.

Composability: Since resources vary dramatically in their capacity and management features, it is important that management capabilities, as well as Web services qualities of service such as security, only need to be supported when absolutely necessary. Composability allows scaling down into small devices as well as up to enterprise-class environments.

Design-time and run-time inspection: Management systems must discover environments and resources during run-time, which is usually after they are introduced into existing systems, and on an ongoing basis, in order to maintain an accurate understanding of ever-changing environments. Management systems also need to be able to inspect and integrate new resources without having to interact with an instance of the resource; for example, in installation, deployment, and just-in-time resource activation scenarios, the resource might not exist when a manager starts to interact with it.

Model agnostic: WSDM does not define what information resources should provide in their management interfaces; this is what a resource model does. WSDM complements the model by defining how to express that model in XML Schema and how to access the model using Web services.

WSDM includes two specifications:

Management Using Web Services (MUWS) defines how to represent and access the manageability interfaces of resources as Web services. It defines a basic set of manageability capabilities, such as resource identity, metrics, configuration, and relationships, which can be composed to express the capability of the management instrumentation. A manageability capability is a composable set of properties, operations, events, metadata, and other semantics that supports a particular management task; it captures concepts common to most resource information models, and identifies a "contract" that a manageable resource asserts it can offer to clients. WSDM MUWS also provides a standard management event format to improve interoperability and correlation.

Management Of Web Services (MOWS): defines how to manage Web services as resources and how to describe and access that manageability using MUWS. MOWS provides mechanisms and methodologies that enable manageable Web services applications to interoperate across enterprise and organizational boundaries.

The following figure shows how MUWS composability is used to combine qualities of service from the Web services platform with manageability capabilities and resource- specific manageability capabilities. Resource management models help define the contents of resource-specific capabilities.


The next figure shows how a manageable Web printer service could be composed, according to the WSDM MOWS specification (MOWS). The functional interface for the printer is a simple Print operation defined and accessed as a WSDL operation. The Resource management interface, which manages the printer device itself, offers two properties -- PrintedPageCount and AvailableTonerCapacity -- and an Enable operation. The properties are advertised in the Resource Properties Document and are accessible through the WS-RF GetProperties operation. Finally the Manageability Capability for managing the printer Web service offers the MOWS metrics, NumberOfRequests, and the additional operational status control operations: Start and Stop.
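The composition described in this printer example can be summed up in a purely conceptual sketch, shown below in plain Python. In WSDM the properties would be exposed through WS-ResourceProperties and the operations through WSDL; the class and the way it is exercised here are illustrative only.

```python
# Conceptual sketch (not WSDL/WS-RF) of the manageable printer Web service:
# functional interface + resource manageability + Web service manageability.
class ManageablePrinterService:
    def __init__(self):
        # resource (device-level) manageability capability
        self.PrintedPageCount = 0
        self.AvailableTonerCapacity = 100
        self.enabled = True
        # Web service manageability capability (MOWS-style metric and controls)
        self.NumberOfRequests = 0
        self.started = True

    # functional interface
    def Print(self, document):
        self.NumberOfRequests += 1
        if self.started and self.enabled:
            self.PrintedPageCount += len(document)
            self.AvailableTonerCapacity -= 1

    # resource manageability operation
    def Enable(self, on=True):
        self.enabled = on

    # Web service manageability operations
    def Start(self):
        self.started = True

    def Stop(self):
        self.started = False

printer = ManageablePrinterService()
printer.Print(["page 1", "page 2"])
print(printer.PrintedPageCount, printer.NumberOfRequests)  # 2 1
```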


WSDM provides significant value to three major groups:

Customers with heterogeneous IT environments: WSDM allows management software from different vendors to interoperate more easily, enabling end-to-end and even cross-enterprise management.

ISVs producing management software: WSDM provides standards for identifying, inspecting, and modifying characteristics of resources in the IT environment. Management applications can take advantage of these to deliver functionality and increase the number and type of resources that management software can address. Over time this will reduce the cost of such applications and broaden their potential function.

Manufacturers of devices: WSDM provides the ability to expose management interfaces using Web services in a standard way, regardless of how the internal instrumentation is done. Any management vendor can use these Web services interfaces, reducing the amount of custom support required.

The WSDM specifications depend on the WS-I Basic Profile (BP) plus other Web services foundation specifications being standardized in OASIS:

WS-Resource Framework (WS-RF) Resource Properties (WSRP) for properties
WS-Notification (WSN) Base Notifications (WSBN) for management event transport
WS-Addressing (WSA) for service references

There is an open source implementation of the WSDM called MUSE[3].


References

1. Specification v.1.1, http://www.oasis-open.org/committees/download.php/20571/wsdm-1.1-os-01.zip
2. OASIS Web Services Distributed Management (WSDM) Technical Committee, http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=wsdm#overview
3. MUSE, http://ws.apache.org/muse/

4.2.81 Web Service Interoperability (WS-I) Basic Profile. WS-I
WS-I profiles specify constraints on how to use SOAP, HTTP, and WSDL. Here we discuss the profiles dealing with functional interoperability; note that several other WS-I profiles deal with security, namely the Basic Security Profile, Reliable Secure Profile, Kerberos Token Profile, REL Token Profile and SAML Token Profile. Functional interoperability is covered by WS-I Basic Profile 1.1.

WS-I Basic Profile 1.1 to be used with WS-I Simple Soap Binding Profile 1.0.

WS-I Basic Profile 1.2 (working draft) to be used with WS-I Attachments Profile 1.0. WS-I Attachments Profile adds support for conveying interoperable SOAP Messages with Attachments-based attachments with SOAP messages.

WS-I has gained a lot of practical relevance and is well supported.

References

1. WS-I, http://www.ws-i.org/

4.2.82 Web Services Level Agreement (WS-LA). IBM
WS-LA[1] is a specification proposed by IBM. WS-LA is focused on providing a framework and a technology for the monitoring and evaluation of Service Level Agreements. Its core language schema provides the means to define Quality of Service statements, and it can be regarded as a basis for electronic contracts. As opposed to WS-Agreement, however, it does not provide means for the publication and negotiation of contracts. The initial WS-LA specification is incomplete and partially contradictory; nevertheless, WS-LA is still the only specification that tries to cover an actual agreement specification in detail. It allows the definition of metrics, conditions and terms, even though its approach to defining metrics must be considered restrictive, as it does not make use of XPath. There are some implementations that successfully make use of the WS-LA specification and try to compensate for these deficiencies. Currently, an obvious choice of technology would cover parts of both specifications, and there is an ongoing effort trying to realise this. As long as WS-Agreement cannot fully satisfy the requirements related to electronic contracts, uptake of WS-LA for defining SLAs should be considered.


References

1. Home, http://www.research.ibm.com/wsla/

4.2.83 Web Services Modeling Ontology (WSMO) - Web Services Modeling Language (WSML). ESSI WSMO Working Group
The Web Service Modeling Ontology (WSMO) is an ontology for semantically describing Semantic Web Services. It is a model for the description of Semantic Web Services that tries to overcome the limits of existing technologies for service description, in particular OWL-S. The Web Service Modeling Language (WSML) is a language that formalizes WSMO. It uses well-known logical formalisms, namely Description Logics, First-Order Logic and Logic Programming, in order to enable the description of various aspects related to Semantic Web Services, and it consists of a number of language variants with different underlying logic formalisms. Both have been developed in the context of the WSMO Working Group [1], as part of the SDK cluster, an initiative of the European Community to align research and development efforts in the area of Semantic Web Services between the SEKT[2], DIP[3], Knowledge Web[4] and ASG[5] research projects. The WSMO Working Group includes the WSML Working Group [6], which aims at developing WSML, and the Web Service Modelling eXecution environment (WSMX) Working Group [7], which aims at providing an execution environment and a reference implementation for WSMO. The conceptual grounding of WSMO is based on the Web Service Modeling Framework (WSMF) [8], wherein four main components are defined.

Ontologies provide machine-readable semantics for the information used by all actors involved in the process of Web service usage, either providers or consumers, allowing interoperability and information interchange among components.

Goals specify objectives that a client may have when consulting a Web service. A goal in WSMO is described by non-functional properties, post-conditions, and effects. Non-functional properties specify information that does not affect the functionality of the element, including for example quality-related attributes. Post-conditions define the state of the desired information space. Effects describe the desired state of the world after the execution of the Web service.

Web Services represent the functional part that must be semantically described in order to allow its (semi-)automated use. In a WSMO specification, Web Services are described by means of non-functional properties, imported ontologies, used mediators, capability and interfaces. A service can be described by multiple interfaces, but has one and only one capability.

Mediators aim to overcome structural, semantic or conceptual mismatches that appear between different components that build the WSMO specification. Mediators are used as connectors to provide interoperability facilities among the rest of components. Mediation within Semantic Web Services can be done at different levels: data level is mediation between heterogeneous data sources; protocol level is mediation between heterogeneous communication patterns; process level is mediation between heterogeneous business processes.

The WSMO Working Group is also working on the definition of a set of use cases in order to exemplify WSMO usage for specific real-life purposes. The different use cases provide valuable insight for testing and adapting the modeling constructs provided in WSMO in real-world scenarios for Web services. Besides demonstrating how to model Web services in WSMO, the use cases also allow the adequacy of the WSMO approach to be demonstrated in terms of providing an exhaustive framework covering all relevant aspects of the semantic description of Web services. OASIS has created a technical committee called the OASIS Semantic Execution Environment (SEE) TC [9] to provide guidelines, justifications and implementation directions for an execution environment for Semantic Web services. The OASIS SEE TC works closely with the WSMX group, since most of its initiatives use WSMO, WSML and WSMX.

References

1. Web Service Modeling Ontology, http://www.wsmo.org/
2. http://sekt.semanticweb.org/
3. http://dip.semanticweb.org/
4. http://knowledgeweb.semanticweb.org/
5. http://asg-platform.org/
6. Web Service Modeling Language, http://www.wsmo.org/wsml
7. Web Service Modelling eXecution environment, http://www.wsmx.org/
8. Web Service Modeling Framework, http://www.w3.org/2005/04/FSWS/Submissions/1/wsmo_position_paper.html#fensel
9. OASIS SEE TC, www.oasis-open.org/committees/semantic-ex/

Other Links

http://www.wsmo.org/TR/d2/v1.3/
http://www.wsmo.org/TR/d16/d16.1/v0.21/

4.2.84 Web Service Semantics (WSDL-S). W3C

The Web Services Semantics - WSDL-S [1] specification is a W3C Member Submission that defines how to add semantic information to WSDL documents. Semantic annotations define the meaning of the inputs, outputs, preconditions and effects of the operations described in a service interface. These annotations reference concepts in an ontology. Semantic annotations are used to automate service discovery, composition, mediation, and monitoring. The specification is conceptually based on, but a significant refinement in details of, the original WSDL-S proposal [2] from the LSDIS laboratory at the University of Georgia. The World Wide Web Consortium (W3C) Web services architecture [3] defines two aspects of the full description of a Web service. The first is the syntactic functional description as represented by WSDL. The second is described as the semantics of the service and is not covered by a specification. In practice, the semantic description is either missing or informally documented. By examining the WSDL description of a service, what the service does cannot be unambiguously determined. WSDL-S tries to close this gap by adding new extensibility elements to the WSDL standard to annotate the semantics of web services. Each service description refers to one or more semantic models. A semantic model captures the terms and concepts used to describe and represent an area of knowledge or some part of the world, including a software system. A semantic model usually includes concepts in the domain of interest, relationships among them, their properties, and their values. During development, the service provider can explicate the intended semantics by annotating the appropriate parts of the Web service with concepts from a richer semantic model or with expressions composed of such concepts. These expressions or references to concepts are called semantic annotations. A semantic annotation is additional information in a document that defines the semantics of a part of that document. In WSDL-S, the semantic annotations are additional information elements in a WSDL document. They define the meaning of elements in the WSDL document by referring to a part of a semantic model. There are a number of potential languages for representing semantics, each offering different levels of semantic expressivity and developer support. The WSDL-S position is that it is not necessary to tie the Web services standards to a particular semantic representation language. WSDL-S provides mechanisms to annotate the service and its inputs, outputs and operations. Additionally, it provides mechanisms to specify and annotate preconditions and effects of Web Services. These preconditions and effects, together with the semantic annotations of inputs and outputs, can enable automation of the process of service discovery. The advantage of this evolutionary approach to adding semantics to WSDL is multi-fold. First, users can, in an upwardly compatible way, describe both the semantics and operation-level details in WSDL - a language that the developer community is familiar with. Second, by externalizing the semantic domain models, this approach remains agnostic to ontology representation languages.


This allows Web service developers to annotate their Web services with their choice of modeling language (such as UML or OWL).
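As a hedged illustration of this annotation mechanism, the following Python sketch attaches a modelReference-style attribute to a WSDL message part, pointing at a concept in an external ontology. The modelReference attribute name follows the WSDL-S proposal, but the wssem namespace URI, the ontology URL and the WSDL fragment are placeholders invented for the example.

import xml.etree.ElementTree as ET

# Illustrative namespace URIs; the wssem URI is a placeholder, not the normative one.
WSDL_NS = "http://schemas.xmlsoap.org/wsdl/"
WSSEM_NS = "http://example.org/wssem-placeholder"

wsdl_doc = f"""<definitions xmlns="{WSDL_NS}">
  <message name="ProcessPurchaseOrderRequest">
    <part name="order" type="xsd:string"/>
  </message>
</definitions>"""

ET.register_namespace("", WSDL_NS)
ET.register_namespace("wssem", WSSEM_NS)
root = ET.fromstring(wsdl_doc)

# Annotate the message part with a concept from a (hypothetical) purchase-order ontology.
part = root.find(f".//{{{WSDL_NS}}}part")
part.set(f"{{{WSSEM_NS}}}modelReference",
         "http://example.org/ontology/purchaseorder#OrderRequest")

print(ET.tostring(root, encoding="unicode"))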

References

1. Web Service Semantics - WSDL-S, http://www.w3.org/Submission/WSDL-S/
2. WSDL-S: Adding semantics to WSDL - White paper, http://lsdis.cs.uga.edu/library/download/wsdl-s.pdf
3. W3C WSA, http://www.w3.org/Submission/WSDL-S/#W3CWSA

4.2.85 Widgets specs. W3C

The W3C Widgets specification defines a Zip-based packaging format and an XML-based configuration document format for widgets. The configuration document is a simple XML-based language that authors can use to record metadata and configuration parameters about a widget. The packaging format is a container for the files required by a widget. Widgets [1] are a class of client-side web application for displaying and updating local or remote data, packaged in a way that allows a single download and installation on a client machine or device. Widgets typically run as standalone applications outside of a web browser, but it is possible to embed them into web pages. Examples range from simple clocks, stock tickers, news casters, games and weather forecasters, to complex applications that pull data from multiple sources to be "mashed up" and presented to a user in some interesting and useful way. For widgets, the W3C Widgets specs define:

- a [ZIP]-based format used to package the files that constitute a widget, as well as how those packages are to be processed;
- an XML-based vocabulary to create a configuration document (used to configure a widget at runtime), as well as the rules for how to parse a configuration document and defaults when a configuration document is unavailable;
- rules that allow a widget user agent to locate and launch the start file of a widget resource;
- an auto-discovery mechanism to allow HTML user agents to "discover" a widget resource from within an HTML document;
- an internationalization model, which automatically selects the appropriate content to display based on the end-user's locale.
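A minimal packaging sketch in Python may help to make the format concrete: a widget package is an ordinary ZIP archive containing the configuration document and the widget's files. The element and attribute names in the configuration document follow the W3C widgets vocabulary only loosely and should be checked against the referenced draft; file names and contents are invented for the example.

import zipfile

# Minimal configuration document; the widget namespace and element names are taken
# from the W3C Widgets family of drafts and should be checked against the version in use.
config_xml = """<?xml version="1.0" encoding="UTF-8"?>
<widget xmlns="http://www.w3.org/ns/widgets" id="http://example.org/clock" version="1.0">
  <name>Example Clock</name>
  <content src="index.html"/>
</widget>
"""

start_file = "<html><body><p>It is now ...</p></body></html>"

# A widget package is simply a ZIP archive containing the configuration document
# and the files (start file, scripts, images, ...) that make up the widget.
with zipfile.ZipFile("clock.wgt", "w", zipfile.ZIP_DEFLATED) as pkg:
    pkg.writestr("config.xml", config_xml)
    pkg.writestr("index.html", start_file)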

References

1. Widgets 1.0: Packaging and Configuration, http://www.w3.org/TR/2008/WD-widgets-20080414


4.2.86 Windows Management Instrumentation. Microsoft

Windows Management Instrumentation (WMI) [1] is a set of extensions to the Windows Driver Model that provides an operating system interface through which instrumented components provide information and notification. WMI is Microsoft's implementation of the Web-Based Enterprise Management (WBEM) and Common Information Model (CIM) standards from the Distributed Management Task Force (DMTF). The purpose of the underlying WBEM initiative is to define a non-proprietary set of environment-independent specifications which allow management information to be shared between management applications. WMI prescribes enterprise management standards and related technologies that work with existing management standards, such as the Desktop Management Interface (DMI) and SNMP. WMI complements these other standards by providing a uniform model. This model represents the managed environment through which management data from any source can be accessed in a common way. As part of the installation process, most of the Microsoft applications available today (e.g. SQL Server, Exchange Server, Microsoft Office, Internet Explorer, Host Integration Server, Automated Deployment Services) extend the standard CIM object model to add the representation of their manageable entities in the CIM repository. This representation is called a WMI class, and it exposes information through properties and allows the execution of some actions via methods. Access to the manageable entities is made via a software component, called a "provider", which is simply a DLL implementing a COM object written in C/C++. Because a provider is designed to access some specific management information, the CIM repository is also logically divided into several areas called namespaces. Each namespace contains a set of providers with their related classes specific to a management area (i.e. Root\Directory\LDAP for Active Directory, Root\SNMP for SNMP information or Root\MicrosoftIISv2 for Internet Information Services information). To locate the huge amount of management information available from the CIM repository, WMI comes with an SQL-like language called the WMI Query Language (WQL).
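The following hedged Python sketch shows how a WQL query can be issued against the local CIM repository through the WMI scripting COM interface. It assumes a Windows host and the third-party pywin32 package; the class and property names queried are just common examples.

# Requires Windows and the pywin32 package (win32com); illustration only.
import win32com.client

locator = win32com.client.Dispatch("WbemScripting.SWbemLocator")
# Connect to the default CIMV2 namespace of the local machine.
services = locator.ConnectServer(".", "root\\cimv2")

# WQL is deliberately SQL-like: select properties from a WMI class.
for process in services.ExecQuery("SELECT Name, ProcessId FROM Win32_Process"):
    print(process.Name, process.ProcessId)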

References

1. Home, http://www.microsoft.com/whdc/system/pnppwr/wmi/default.mspx

4.2.87 Workflow XML (Wf-XML). WfMC

Wf-XML is a BPM standard developed by the Workflow Management Coalition as an extension of the OASIS Asynchronous Service Access Protocol (ASAP) to manage and monitor the life cycle of long-running processes. ASAP's monitoring abilities detect changes in the process execution status. Wf-XML extends ASAP by offering additional WS operations to exchange process execution definitions. Wf-XML also permits invocations between different BPM processes executed in different BPM engines. The current Wf-XML 2.0 version supports SOAP message exchanges and conforms to WSDL and other WS-* standards.

References


1. Wf-XML 2.0 Current draft, http://www.wfmc.org/standards/docs/WfXML20-200410c.pdf
2. Wf-XML 2.0 XSD, http://www.wfmc.org/standards/docs/wfxml20.xsd

Other links:

Workflow Management Coalition: http://www.wfmc.org/
OASIS ASAP: http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=asap

4.2.88 WS Addressing. W3C

WS-Addressing provides transport-neutral mechanisms that allow web services to communicate addressing information. WS-Addressing is a standardized way of including, in the XML message itself, addressing data that would otherwise be carried only by the transport (e.g. HTTP). WS-Addressing consists of three parts. WS-Addressing Core 1.0 defines the following two constructs:

- a structure for communicating a reference to a Web service endpoint (Endpoint Reference);
- a set of Message Addressing Properties which associate addressing information with a particular message.

Message Addressing Properties communicate addressing information relating to the delivery of a message to a Web service. The main properties are message destination, source endpoint, reply endpoint, fault endpoint and unique message ID. They can be used for routing a message to the right endpoint and in order to uniquely identify messages for various purposes.

WS-Addressing SOAP Binding 1.0 defines the binding of the abstract properties defined in Web Services Addressing 1.0 - Core to SOAP messages.

WS-Addressing WSDL Binding 1.0 / Web Services Addressing 1.0 - Metadata defines how the abstract properties defined in Web Services Addressing 1.0 - Core are described using WSDL, how to include WSDL metadata in endpoint references, and how WS-Policy can be used to indicate the support of WS-Addressing by a Web service.
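The following Python sketch builds a SOAP envelope whose header carries some of the Message Addressing Properties listed above (destination, action, message ID and reply endpoint). It is an illustration only: the endpoint URIs and the action value are invented, and a real Web services stack would normally generate these headers itself.

import uuid
import xml.etree.ElementTree as ET

SOAP_NS = "http://www.w3.org/2003/05/soap-envelope"
WSA_NS = "http://www.w3.org/2005/08/addressing"   # WS-Addressing 1.0 namespace

ET.register_namespace("soap", SOAP_NS)
ET.register_namespace("wsa", WSA_NS)

envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
header = ET.SubElement(envelope, f"{{{SOAP_NS}}}Header")
ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")

# Message Addressing Properties: destination, action, message ID and reply endpoint.
ET.SubElement(header, f"{{{WSA_NS}}}To").text = "http://example.org/orders"
ET.SubElement(header, f"{{{WSA_NS}}}Action").text = "http://example.org/orders/Submit"
ET.SubElement(header, f"{{{WSA_NS}}}MessageID").text = f"urn:uuid:{uuid.uuid4()}"
reply_to = ET.SubElement(header, f"{{{WSA_NS}}}ReplyTo")
ET.SubElement(reply_to, f"{{{WSA_NS}}}Address").text = "http://example.org/client/callback"

print(ET.tostring(envelope, encoding="unicode"))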

References

1. W3C WS-Addressing, http://www.w3.org/Submission/2004/05/

4.2.89 WS Agreement. OGF/GGF

The WS-Agreement specification [1][2] belongs to the Open Grid Forum (OGF), previously known as the Global Grid Forum (GGF). The quality of service (QoS) and other guarantees that depend on actual resource usage must be obtained through state-dependent guarantees from the service provider, represented as an agreement on the service and the associated guarantees.


An agreement between a service consumer and a service provider specifies one or more service level objectives, both as expressions of requirements of the service consumer and assurances by the service provider on the availability of resources and/or on service qualities. For example, an agreement may provide assurances on the bounds of service response time and service availability. Alternatively, it may provide assurances on the availability of minimum resources such as memory, CPU MIPS or storage. In addition, the QoS should be monitored and service consumers may be notified of failure to meet these guarantees. The objective of the WS-Agreement specification is to define a language and a protocol for advertising the capabilities of service providers, for creating agreements based on creational offers, and for monitoring agreement compliance at runtime. The specification consists of three parts which may be used in a composable manner:

1. An XML-based schema for specifying the nature of the agreement
2. An XML-based schema for specifying an agreement template to facilitate discovery of compatible agreement parties
3. A set of port types and operations for managing agreement life-cycle, including creation, expiration, and monitoring of agreement states

The creation of an agreement can be initiated by the service consumer side or by the service provider side, and the protocol provides hooks enabling such symmetry. An agreement includes information on the agreement parties and a set of terms. The terms MAY comprise one or more service terms and zero or more guarantee terms specifying service level objectives and business values associated with these objectives. The agreement creation process typically starts with a pre-defined agreement template specifying customizable aspects of the documents, and rules that must be followed in creating an agreement, which we call agreement creation constraints.
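As an informal illustration of the agreement structure described above, the following Python sketch models an agreement as plain data with parties, service terms and guarantee terms carrying service level objectives, plus a toy runtime check of one objective. The field names and objectives are invented and do not follow the normative XML schema.

from dataclasses import dataclass, field

@dataclass
class GuaranteeTerm:
    """A guarantee term: a service level objective plus an (optional) business value."""
    objective: str
    business_value: str = ""

@dataclass
class Agreement:
    consumer: str
    provider: str
    service_terms: list = field(default_factory=list)
    guarantee_terms: list = field(default_factory=list)

# Illustrative agreement: response-time and availability objectives for a compute service.
agreement = Agreement(
    consumer="ACME Retail",
    provider="Example Compute Provider",
    service_terms=["JobSubmissionService", "512 MB memory per job"],
    guarantee_terms=[
        GuaranteeTerm(objective="response time < 2 s",
                      business_value="penalty: 5 EUR per violation"),
        GuaranteeTerm(objective="availability >= 99.5 % per month"),
    ],
)

def violated(term: GuaranteeTerm, measured_response_time: float) -> bool:
    # Toy runtime check for the response-time objective only.
    return term.objective.startswith("response time") and measured_response_time >= 2.0

print([violated(t, 2.4) for t in agreement.guarantee_terms])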

References

1. Specification, http://www.ggf.org/Public_Comment_Docs/Documents/Oct-2005/WS-AgreementSpecificationDraft050920.pdf
2. Specification (Current), http://www.ogf.org/documents/GFD.107.pdf

4.2.90 WS Composite Application Framework 1.0 (WS-CAF). OASIS

WS-CAF is a set of specifications developed within the OASIS consortium. The purpose of OASIS WS-CAF was to define a generic and open framework for applications that contain multiple services used in combination (composite applications). WS-CAF includes three specifications that can be implemented incrementally to address the range of requirements needed to support a variety of simple to complex composite applications: WS Context (WS-CTX), WS Coordination Framework (WS-CF) and WS Transactions (WS-TXM).


The overall aim of the combination of the parts of WS-CAF is to provide a complete solution that supports various transaction processing models and architectures. The WS-CAF specifications are designed to complement Web services orchestration and choreography technologies such as WS-BPEL and WSCI and are compatible with other Web services specifications.

4.2.90.1 WS Context (WS-CTX)

The ability to model and compose arbitrary units of work (activities) by means of contexts is a requirement in a variety of aspects of distributed applications such as workflow and business-to-business interactions. WS-Context provides a mechanism for Web services to share persistent state, which is required to support conversational interactions, single sign-on, transaction coordination, and other features dependent upon system-level data items such as IDs, tokens etc. A context provides a way to correlate a set of messages into a larger unit of work by sharing common information such as a security token exchanged within a single sign-on session. Because distributed computing systems depend upon a variety of IDs, tokens, channels, and addresses, which are a part of every software infrastructure, and because Web services are independent of any particular execution environment, this type of system-level information needs to be organized and managed in a persistent, shared context structure. Applications need a service to manage the lifecycle of the shared context, and to ensure the context structure is kept up to date and accessible. WS-Context defines a context data structure that can be arbitrarily augmented. By default, all the context defines is a unique context identifier, the type of the context (e.g., transaction or security) and a timeout value (how long the context can remain valid).

4.2.90.2 WS Coordination Framework (WS-CF)

WS-CF extends WS-CTX by defining the coordinator role that is in charge of submitting notification messages to Web services registered in a particular context. It provides operations to register/unregister participants in contexts. The coordinator is notified when a particular context is created, updated or terminated in order to trigger the corresponding coordination actions. The fundamental idea underpinning WS-CF is recognition of a shared and generic need for propagating context information in the Web services environment, independently of the applications involved. The WS-CF specification defines a framework that allows different coordination protocols to be plugged in to coordinate the work between clients, services and participants. These coordination protocols are defined in the WS-TXM specification.

4.2.90.3 WS Transaction Management (WS-TXM)

WS-TXM provides the specific protocols for interoperability across multiple transaction managers. It defines three different transaction coordination protocols: ACID transactions, long running activities (LRAs) and business process transactions (BP). These transaction coordination protocols aim to agree on a common transactional outcome among the transaction participants.

- ACID transactions: This is the basic transaction model and defines a two-phase commit (2PC) coordination protocol. Although ACID transactions may not be suitable for all Web Services, they are most definitely suitable for some, and particularly for high-value interactions such as those involved in finance. As a result, the ACID transaction model defined in WS-TXM has been designed with interoperability in mind. In the ACID model, each activity is bound to the scope of a transaction, such that the end of an activity automatically triggers the termination (commit or rollback) of the associated transaction.

- LRA transactions: This model is designed for business interactions that can last long periods of time. Within this model, an activity reflects business interactions: all work performed within the scope of an application is required to be compensatable. Therefore, an application's work is either performed successfully or undone. How individual Web services perform their work and ensure it can be undone if compensation is required are implementation choices and are not exposed to the LRA model. The LRA model simply defines the triggers for compensation actions and the conditions under which those triggers are executed. For example, when a user reserves a seat on a flight, the airline reservation centre may take an optimistic approach and actually book the seat and debit the user's account, relying on the fact that most of the customers who reserve seats later book them; the compensation action for this activity would obviously be to un-book the seat and credit the user's account. Work performed within the scope of a nested LRA must remain compensatable until an enclosing service informs the individual service(s) that it is no longer required.

- BP transactions: In the business process transaction model (BP model), all parties involved in a business process reside within business domains, which may themselves use business processes to perform work. Business process transactions are responsible for modelling and managing the complex interactions between these domains. This is useful when it is necessary to glue together disparate services and domains, some of which may not be using the same transaction implementation behind the service boundary. For example, consider the purchase of a home entertainment system shown in the following figure. The work necessary to obtain each component is modeled as a separate task, or Web service. In this example, the HiFi task is actually composed of two sub-tasks. The user may interact synchronously with the shop to build up the entertainment system. Alternatively, the user may submit an order (possibly with a list of alternate requirements) to the shop, which will eventually call back when it has been filled; likewise, the shop then submits orders to each supplier, requiring them to call back when each component is available (or is known to be unavailable).
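The compensation idea behind the LRA model can be sketched in a few lines of Python: work is performed eagerly and a compensation action is recorded for each step, so that cancelling the activity undoes the completed work in reverse order. This is a conceptual sketch of the pattern only, not of the WS-TXM protocol messages; the seat-booking steps are taken from the example above.

class LongRunningActivity:
    """Toy long-running activity: work is performed eagerly and compensations
    are recorded so the activity's effects can be undone if it is cancelled."""

    def __init__(self):
        self._compensations = []

    def perform(self, work, compensation):
        work()
        self._compensations.append(compensation)

    def cancel(self):
        # Undo the completed work in reverse order.
        for compensate in reversed(self._compensations):
            compensate()

lra = LongRunningActivity()
lra.perform(lambda: print("seat 12A booked"), lambda: print("seat 12A un-booked"))
lra.perform(lambda: print("account debited"), lambda: print("account credited"))

# The flight is later cancelled, so the enclosing activity triggers compensation.
lra.cancel()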


References

1. OASIS WS-CAF (Composite Application Framework), http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=ws-caf
2. WS-CTX 1.0 Specification, http://docs.oasis-open.org/ws-caf/ws-context/v1.0/OS/wsctx.html
3. WS-CF 1.0 Specification, http://www.oasis-open.org/committees/download.php/15042/WS-CF.zip
4. WS-TXM 1.0 ACID Specification, http://www.oasis-open.org/committees/download.php/19474/WS-ACID.zip
5. WS-TXM 1.0 LRA Specification, http://www.oasis-open.org/committees/download.php/19473/WS-LRA.zip
6. WS-TXM 1.0 BP Specification, http://www.oasis-open.org/committees/download.php/19475/WS-BP.zip

Other links

JASS WS-CAF: An open-source implementation of the WS-CAF, http://forge.objectweb.org/projects/jass/

Notes

WS-CAF was superseded by WS-TX. Although WS-CAF is deprecated, WS-CTX is still a valid OASIS standard. This makes it possible to define standard contexts for those Web Services-based applications that need them.


4.2.91 WS Conversation Language WS-CL. W3C

The Web Services Conversation Language (WS-CL) is a simple language for defining abstract interfaces of Web services. WS-CL specifies the XML documents being exchanged and the related proper exchange sequence. WS-CL conversation definitions are themselves XML documents; for this reason they can be understood by Web services infrastructures and development tools. WS-CL specializes in modelling the sequence of the interactions or operations of one interface and fills the gap between simple interface definition languages (which do not specify any choreography) and complex process or flow languages (which describe complex global multi-party conversations and processes). WS-CL's main goal is "to define the minimal set of concepts necessary to specify conversations", so it results in a lightweight interface specification language. Related implementations are therefore very simple, but at the same time the expressiveness of WS-CL specifications is quite limited. WS-CL is specifically targeted at public workflow types. WS-CL has been developed by HP, derived from the Conversation Definition Language (CDL) of its now abandoned E-Speak framework.

References

1. Web Services Conversation Language (WSCL) 1.0, http://www.w3.org/TR/wscl10/

4.2.92 WS Enumeration. W3C

WS-Enumeration describes a general SOAP-based protocol for enumerating a sequence of XML elements that is suitable for traversing logs, message queues, or other linear information models. The specification defines a simple SOAP-based protocol for enumeration that allows the data source to provide a session abstraction, called an enumeration context, to a consumer; the context represents a logical cursor through a sequence of data items. The consumer can then request XML element information items using this enumeration context over the span of one or more SOAP messages. Somewhere, state must be maintained regarding the progress of the iteration. This state may be maintained between requests by the data source being enumerated or by the data consumer. WS-Enumeration allows the data source to decide, on a request-by-request basis, which party will be responsible for maintaining this state for the next request. This specification is a W3C submission. Adoption in practice has been announced but still seems to be in progress.
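The consumer-side pattern can be sketched in Python as follows: the consumer first obtains an enumeration context (corresponding to the Enumerate operation) and then pulls batches of items against that context (corresponding to Pull) until the end of the sequence is reached. The sketch keeps the iteration state on the data-source side and is purely conceptual; no SOAP messages are shown and all names are invented.

class LogDataSource:
    """Toy data source keeping the iteration state on the server side,
    keyed by an opaque enumeration context."""

    def __init__(self, items):
        self._items = items
        self._contexts = {}

    def enumerate(self):
        context = f"ctx-{len(self._contexts)}"
        self._contexts[context] = 0          # cursor position for this context
        return context

    def pull(self, context, max_elements):
        start = self._contexts[context]
        batch = self._items[start:start + max_elements]
        self._contexts[context] = start + len(batch)
        end_of_sequence = self._contexts[context] >= len(self._items)
        return batch, end_of_sequence

source = LogDataSource([f"log entry {i}" for i in range(7)])
ctx = source.enumerate()                     # corresponds to the Enumerate operation
while True:
    items, done = source.pull(ctx, max_elements=3)   # corresponds to Pull
    print(items)
    if done:
        break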

References

1. W3C WS Enumeration, http://www.w3.org/Submission/WS-Enumeration/


4.2.93 WS Federation. OASIS

OASIS' WS-Federation [1] is a set of patterns and example scenarios that describe advanced federation scenarios and how WS-Security and WS-Trust may be used in these situations. This specification can be regarded as a source book of ideas that may be used or adapted as necessary for specific needs. The main risk is that WS-Federation is not as mature as WS-Security. Nevertheless, it is reasonably mature, well adopted and supported by many large players.

References

1. Home, http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=wsfed

4.2.94 WS Human Task (WS-HT). SAP

WS-HT is developed as part of the WS-BPEL Extension for People (BPEL4People) Technical Committee, but it is a self-contained specification. The purpose of the BPEL4People TC is to define (1) extensions to the OASIS WS-BPEL 2.0 Standard to enable human interactions and (2) a model of human interactions that are service-enabled. This is carried out through the BPEL4People and WS-HumanTask specifications (see figure below).

WS-HT defines interfaces that allow people tasks to be introduced as services in an SOA independently of WS-BPEL. The idea is that non-human services have a consistent and standard way to interact with humans. By incorporating human tasks in composite applications, project teams' flexibility and efficiency can be improved in delivering standards-based solutions that incorporate people, processes and services into a single application.


Human tasks allow the integration of humans in service-oriented applications. A human task is defined as a service "implemented" by people. This means that a human task has people assigned to it. Human tasks can have special properties such as timeouts that trigger concrete actions, and notifications that send information about noteworthy business events to people. A human task has two interfaces:

- an interface that exposes the service offered by the task, like a translation service or an approval service;
- an interface that allows people to deal with tasks (e.g. to query for human tasks waiting for them, and to work on these tasks).

The goal of this specification is to enable:

- Portability: The ability to take human tasks and notifications created in one vendor's environment and use them in another vendor's environment.
- Interoperability: The capability for multiple components (task infrastructure, task list clients and applications or processes with human interactions) to interact using well-defined messages and protocols. This enables combining components from different vendors, allowing seamless execution.

The following is a brief example that illustrates why WS-HT is necessary and how it works. Information about human tasks or notifications needs to be made available in a human-readable way to allow users to deal with their tasks and notifications via a user interface. These notifications can be sent, for example, to a user's inbox. By using WS-HT, developers are able to create inboxes with a standard API that can be accessed by a BPEL4People task. The WS-HT standard is needed because currently vendors create APIs for inboxes in a proprietary way, so it is difficult to integrate tasks from systems based on different vendor platforms even though they follow the WS-BPEL standard. Integration will be facilitated by having all the vendors and developers working in the BPEL space use standard APIs. When a people-related activity appears in a process, BPEL4People creates a task and associates that task with an inbox via a WS-HT API call. WS-HT then provides the runtime infrastructure for a work queue. It provides a standard definition of what a task looks like, a standard way for how a task gets created and a standard set of APIs for how it interacts.

References

1. BPEL4People Technical Committee: Web Services for Human Task (WS-HumanTask), http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=bpel4people
2. Web Services for Human Task (WS-HumanTask) specification v1.0, https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/a0c9ce4c-ee02-2a10-4b96-cb205464aa02

Other Links


ActiveBPEL: An open-source BPEL implementation including WS-HT, http://www.activevos.com/community-open-source.php

4.2.95 WS Management. DMTF

WS-Management [1] is a specification driven by the Distributed Management Task Force (DMTF). WS-Management provides a common way for systems to access and exchange management information across an IT infrastructure. By using Web services to manage IT systems, deployments that support WS-Management enable IT managers to remotely access devices on their networks - everything from silicon components and handheld devices to PCs, servers and large-scale data centers. WS-Management is the first specification in support of the DMTF initiative to expose CIM resources via a set of Web services protocols. The WS-Management specification promotes interoperability between management applications and managed resources. By identifying a core set of Web service specifications and usage requirements to expose a common set of operations that are central to all systems management, WS-Management has the ability to:

- DISCOVER the presence of management resources and navigate between them
- GET, PUT, CREATE, and DELETE individual management resources, such as settings and dynamic values
- ENUMERATE the contents of containers and collections, such as large tables and logs
- SUBSCRIBE to events emitted by managed resources
- EXECUTE specific management methods with strongly typed input and output parameters

The following are open source implementations of the specification: Openwsman [2], Wiseman [3] and SOA4D [4].

References

1. Specification, http://www.dmtf.org/standards/wbem/wsman
2. Openwsman, http://www.openwsman.org/
3. Wiseman, https://wiseman.dev.java.net/
4. SOA4D, http://www.soa4d.org/

4.2.96 WS Metadata Exchange. BEA Systems / IBM / Microsoft / SAP

Web services use metadata to describe what other endpoints need to know to interact with them. To bootstrap communication with a Web service, this specification defines three request-response message pairs that allow efficient, incremental retrieval of a Web service's metadata. The specification defines how to retrieve the following types of metadata:


- policy data, i.e. the WS-Policy associated with the receiving endpoint or with a given target namespace;
- the WSDL associated with the receiving endpoint or with a given target namespace;
- the XML schema associated with a given target namespace.

In response to the request, this specification defines an encapsulation that contains the three different ways the metadata may be returned. First, the metadata itself may be simply included in the response. Second, a URI may be returned, to which an HTTP GET can then be sent to retrieve the metadata from that location. And third, a WS-Addressing Endpoint Reference of a WS-Transfer Metadata Resource may be returned, to which a WS-Transfer Get may be issued to retrieve the metadata. This specification also defines how a WS-Addressing Endpoint Reference can be modified to include this encapsulation. This is not yet a standard. The problem is that it relies on WS Transfer whose status is a W3C submission.

References

1. WS Metadata Exchange, http://download.boulder.ibm.com/ibmdl/pub/software/dw/specs/ws-mex/metadataexchange.pdf

4.2.97 WS Notification, WS Eventing, WS EventNotification. OASIS, W3C

The topic of adding event mechanisms such as publish/subscribe to Web services still sees competing standards. OASIS WS-Notification 1.3 defines a pattern-based approach for disseminating information amongst Web services. It provides a standardized way for one Web service (or other entity) to disseminate information to another set of Web services, without having to have prior knowledge of those services. It adopts the publish/subscribe pattern from event-driven architectures. The standard consists of the following parts.

WS Base Notification defines standard message exchanges that allow one service to register or de-register with another, and to receive notification messages from that service.

WS Brokered Notification builds on WS-BaseNotification to define the message exchanges to be implemented by an intermediary "Notification Broker."

WS Topics provides an XML model to organize and categorize classes of events into "Topics," enabling users of WS-BaseNotification or WS-BrokeredNotification to specify the types of events in which they are interested.

WS-Eventing is a standard submitted to W3C that describes a protocol that allows Web services to subscribe to or accept subscriptions for event notification messages. It is similar to WS Base Notification.


WS-EventNotification is an initiative intended to harmonize the previous specifications; it has not yet become a standard. It is expected to replace WS-Eventing, but not to completely replace WS-Notification.
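The publish/subscribe pattern underlying these specifications can be illustrated with a small in-process broker in Python: consumers subscribe to topics and producers publish notifications without prior knowledge of the registered consumers. This is a conceptual sketch only and does not reflect the WS-Notification or WS-Eventing message formats; topic and consumer names are invented.

from collections import defaultdict

class NotificationBroker:
    """In-process stand-in for a brokered notification service:
    consumers subscribe to topics, producers publish to topics."""

    def __init__(self):
        self._subscriptions = defaultdict(list)

    def subscribe(self, topic, consumer):
        self._subscriptions[topic].append(consumer)

    def notify(self, topic, message):
        for consumer in self._subscriptions[topic]:
            consumer(topic, message)

broker = NotificationBroker()
broker.subscribe("stock/ACME", lambda topic, msg: print(f"dashboard got {msg} on {topic}"))
broker.subscribe("stock/ACME", lambda topic, msg: print(f"auditor got {msg} on {topic}"))

# The producer does not need prior knowledge of the registered consumers.
broker.notify("stock/ACME", {"price": 42.0})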

References

1. OASIS WS Notification committee, http://www.oasis-open.org/committees/wsn/
2. W3C WS Eventing, http://www.w3.org/Submission/WS-Eventing/

4.2.98 WS Policy. W3C

WS-Policy [1] is a specification driven by W3C. WS-Policy provides a flexible and extensible machine-readable language for expressing the capabilities, requirements, and general characteristics of Web Services. WS-Policy defines a framework and a model for the expression of these properties in each domain in the form of policies. A domain in this context is a generic field of interest that applies to the service, such as the following:

- Security
- Privacy
- Application priorities
- User account priorities
- Traffic control
- ...

Web service developers use policy-aware clients that understand policy expressions and engage the behaviors represented by providers automatically. These behaviors may include security, reliability, transaction, message optimization, etc. WS-Policy is a simple language that hides complexity from developers, automates Web service interactions, and enables secure, reliable and transacted Web Services. The language is very simple and only has four elements to define the policies for a Web Service: Policy, All, ExactlyOne and PolicyReference, and one attribute: wsp:Optional. WS-Policy defines a policy to be a collection of policy alternatives, where each policy alternative is a collection of policy assertions. Some policy assertions specify traditional requirements and capabilities that will ultimately manifest on the wire (e.g., authentication scheme, transport protocol selection). Other policy assertions have no wire manifestation yet are critical to proper service selection and usage (e.g., privacy policy, QoS characteristics). WS-Policy provides a single policy grammar to allow both kinds of assertions to be reasoned about in a consistent manner. The figure below describes the policy data model:


A policy-aware client uses a policy to determine whether one of these policy alternatives (e.g. the conditions for an interaction) can be met in order to interact with the associated Web Service. Such clients may choose any of these policy alternatives and must choose exactly one of them for a successful Web service interaction. Clients may choose a different policy alternative for a subsequent interaction. It is important to understand that a policy is a useful piece of metadata in machine-readable form that enables tooling, yet is not required for a successful Web service interaction.
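The following Python sketch illustrates how a policy-aware client might select one policy alternative it can satisfy, with a policy modelled as a list of alternatives and each alternative as a set of assertion names. The assertion names are only illustrative, and the selection rule is a simplification of the normative normalization and intersection rules of the framework.

# A policy is a collection of alternatives; each alternative is a collection of assertions.
service_policy = [
    {"sp:TransportBinding", "sp:HttpsToken"},        # alternative 1: transport-level security
    {"sp:SymmetricBinding", "wsrmp:RMAssertion"},     # alternative 2: message security + reliable messaging
]

client_capabilities = {"sp:TransportBinding", "sp:HttpsToken", "wsam:Addressing"}

def choose_alternative(policy, capabilities):
    """Return the first alternative whose assertions the client can all engage."""
    for alternative in policy:
        if alternative <= capabilities:
            return alternative
    return None

print(choose_alternative(service_policy, client_capabilities))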

4.2.99 WS Policy Framework and Closely Related Standards. W3C

WS-Policy defines a framework and a model (a grammar) for expressing properties (capabilities, requirements, constraints) of entities in an XML Web services-based system, typically Web Services. Policies may be used for both providers and consumers.


Scope: Policy properties either become manifest on the wire (for example, authentication scheme, transport protocol selection) or apply to service selection and usage (for example, privacy policy, QoS characteristics). More specific standards for domain-specific policies have been developed, such as WS-SecurityPolicy, WS-ReliableMessaging and WS-ReliableMessagingPolicy, WS-SecureConversation, WS-Security, WS-Transactions, WS-Addressing or WS-Trust.

Structure: WS-Policy defines a policy to be a collection of one or more policy assertions. WS-PolicyAssertion defines the Web Services Policy Assertions Language.

Integration: WS-Policy does not specify how policies are discovered or attached to a Web service. WS-PolicyAttachment defines methods for associating WS-Policy expressions with Web Services via WSDL or UDDI.

Status: WS-Policy has been a W3C Recommendation since 2007.

References

1. W3C Web Services Policy 1.5 Framework, http://www.w3.org/TR/ws-policy

4.2.100 WS Reliable Messaging 1.1. OASIS

SOAP over HTTP is not sufficient when an application-level messaging protocol must also guarantee some level of reliability and security, since the underlying infrastructure may be unreliable. WS-ReliableMessaging is defined as SOAP header extensions and is independent of the underlying protocol. This standard has integrated previous efforts from WS-ReliableMessaging 1.0 and OASIS WS Reliability 1.1. There is potential overlap with the ebXML Message Service Specification 2.0. The standard aims at supporting the following delivery assurances.

AtMostOnce Messages will be delivered at most once without duplication or an error will be raised on at least one endpoint. It is possible that some messages in a sequence may not be delivered.

AtLeastOnce Every message sent will be delivered or an error will be raised on at least one endpoint. Some messages may be delivered more than once.

ExactlyOnce Every message sent will be delivered without duplication or an error will be raised on at least one endpoint. This delivery assurance is the logical "and" of the two prior delivery assurances.

InOrder Messages will be delivered in the order that they were sent. This delivery assurance may be combined with any of the above delivery assurances. It requires that the order of messages observed by the ultimate receiver matches the order in which they were sent.


The Reliable Messaging (RM) Model describes a protocol for reliably exchanging a sequence of messages. The specification only deals with the contents and behavior of messages as they appear "on the wire".

The RM Source requests the creation of a Sequence by sending a CreateSequence message. The RM Destination replies with a CreateSequenceResponse message which assigns the unique Sequence Identifier. It is possible to establish a reverse sequence when creating a sequence.

Each message that requires reliable delivery assurance includes a Sequence header block. The Sequence header block contains a unique Identifier for the Sequence, and each message in the Sequence is assigned a unique MessageNumber which starts at 1 and increases monotonically by one for each subsequent message in the Sequence. The RM Destination acknowledges successful receipt of messages by including a SequenceAcknowledgement header block in messages sent back to the RM Source, reporting the full range of received messages in the Sequence. This allows the acknowledgements to be sent unreliably, as the information about a particular message is carried in each and every SequenceAcknowledgement message.

The RM Source identifies the last message in a Sequence by including a LastMessage child element in the Sequence header block. At any time during the life of a Sequence, the RM Source can send a message containing an AckRequested header block which requests that the RM Destination send a SequenceAcknowledgement.

Once the RM Source has received acknowledgement that all of the messages within a Sequence have been successfully received, it sends a TerminateSequence message, terminating the Sequence. As soon as the RM Destination receives the TerminateSequence message, it can safely discard all of the state related to that Sequence.
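The sender-side behaviour can be sketched in Python: an RM source assigns monotonically increasing message numbers within a sequence, records unacknowledged messages, and retransmits them until a SequenceAcknowledgement reports their receipt. The sketch is conceptual only; it leaves out the actual SOAP header blocks and the CreateSequence/TerminateSequence exchanges, and all names are invented.

class ReliableMessagingSource:
    """Toy RM source: numbers messages within a sequence, tracks acknowledgements
    and retransmits anything not yet acknowledged."""

    def __init__(self, sequence_id):
        self.sequence_id = sequence_id
        self.next_number = 1
        self.unacknowledged = {}

    def send(self, payload, transmit):
        number = self.next_number
        self.next_number += 1
        self.unacknowledged[number] = payload
        transmit(self.sequence_id, number, payload)

    def on_sequence_acknowledgement(self, acknowledged_numbers):
        for number in acknowledged_numbers:
            self.unacknowledged.pop(number, None)

    def retransmit(self, transmit):
        for number, payload in sorted(self.unacknowledged.items()):
            transmit(self.sequence_id, number, payload)

transmit = lambda seq, num, payload: print(f"seq={seq} msg#{num}: {payload}")
source = ReliableMessagingSource("urn:uuid:seq-1")
for text in ("a", "b", "c"):
    source.send(text, transmit)

source.on_sequence_acknowledgement({1, 3})   # the acknowledgement reports received messages
source.retransmit(transmit)                  # message #2 is sent again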

WS ReliableMessaging Policy 1.1 is a related standard. To enable an RM Destination and an RM Source to describe their requirements for a given Sequence, this specification defines a single RM policy assertion that leverages the WS-Policy framework. The RM policy assertion indicates that the RM Source and RM Destination MUST use WS-ReliableMessaging to ensure reliable message delivery.

References

1. OASIS WS ReliableMessaging 1.1, http://docs.oasis-open.org/ws-rx/wsrm/200702/wsrm-1.1-spec-os-01.pdf
2. OASIS WS ReliableMessaging Policy 1.1, http://docs.oasis-open.org/ws-rx/wsrmp/v1.1/wsrmp.pdf


4.2.101 WS Resource Framework. OASIS

Web Services Resource Framework (WSRF) is a set of Web service specifications developed by the OASIS organization. Together with WS-Notification, these specifications are the basis for implementing OGSA capabilities using Web services. By themselves, Web Services are nominally stateless, so the main goal of WSRF is to provide Web Services with a standard and complete way to access and manage state. For this reason, Web Services implemented according to WSRF can have one or more persistent states. This is mainly achieved by specifying, inside the request, the resource that should be used (e.g. encapsulated within the WS-Addressing endpoint reference) and a set of properties for it. These properties can be used to manage resource states. One domain in which state is very useful is the Grid environment. This technology is widely accepted and critical for developing systems which are compliant with OGSA. WSRF has many implementations, so it should be considered quite mature, but its implementations come mainly out of research projects. Probably the most used is Globus Toolkit 4.
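The following Python sketch illustrates, under invented names, the WS-Resource pattern of a stateless service front-end managing several stateful resources that are selected by a key carried with each request (in WSRF, a reference parameter inside the endpoint reference). It is conceptual only and does not use the normative resource-property operations or XML formats.

class PrinterResourceService:
    """Toy WS-Resource-style service: one stateless service front-end,
    many stateful resources selected by a reference parameter."""

    def __init__(self):
        self._resources = {}   # resource key -> resource properties document

    def create_resource(self, key, properties):
        self._resources[key] = dict(properties)

    def get_resource_property(self, key, name):
        # In WSRF terms: roughly a GetResourceProperty call against the resource
        # identified by the reference parameter carried in the endpoint reference.
        return self._resources[key][name]

    def set_resource_property(self, key, name, value):
        self._resources[key][name] = value

service = PrinterResourceService()
service.create_resource("printer-42", {"QueueLength": 0, "TonerLevel": "ok"})
service.set_resource_property("printer-42", "QueueLength", 3)
print(service.get_resource_property("printer-42", "QueueLength"))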

References

1. Web Services Resource Framework (WSRF) v1.2, http://www.oasis-open.org/specs/index.php#wsrfv1.2
2. The WS-Resource Framework, http://www.globus.org/wsrf/

Other Links

Globus Toolkit, http://www.globus.org/toolkit/

4.2.102 WS Resource Transfer (WS-RT). W3C

WS-Resource Transfer (WS-RT) is a specification intended to form an essential core component of a unified resource access protocol for the Web services space [1]. Its main purpose is to support accessing and operating on stateful resources, where a resource is defined as a Web service that is addressable by an endpoint reference as defined in WS-Addressing [3] and that can be represented by an XML document. Essentially, it provides a set of extensions [2] to the former WS-Transfer specification. Beyond the original "get", "put", "create", and "delete" operations on resources provided in WS-Transfer, WS-RT expands on those and adds the functionality of fragment access. This allows an operation to access a small part of the overall XML resource representation. The main competitor of the former WS-Transfer was the OASIS standard Web Services Resource Framework (WS-RF) [4], and the producers of WS-RT looked at WS-RF to find functionalities missing in WS-Transfer.


Looking at WS-RF, they have also been able to provide the basics needed to ease the migration from either of these older standards to the new WS-RT. Since WS-RT is just an enhancement of WS-Transfer, WS-Transfer functionality is preserved as it was originally. Fragment access, instead, was a functionality provided in WS-RF but not available in WS-Transfer. There has also been a reworking of the metadata support to partially provide for a lifetime element that contains information similar to what is defined in the WS-ResourceLifetime specification [5]. WS-Resource Transfer currently has the status of a Working Draft of the W3C consortium, and its main editors are from IBM (involved also in the WS-RF TC), Oracle and Avaya. WS-RT seems to combine the benefits of WS-Transfer, namely compliance with the basic WS standards and minimal functionality, with the relevant capability provided by WS-RF of operating on resource fragments. WS-RT is a specification that is relevant for NEXOF-RA since it helps in implementing mechanisms to associate state with web services as well as managing the lifecycle of a resource. These mechanisms are indeed required to support the management of infrastructure and computational resources as services, and are thus required to partially address the Resource key concern of the NEXOF-RA.

References

1. Web Service Resource Transfer specifications, Draft 29 September 2009, http://www.w3.org/TR/ws-resource-transfer/
2. http://www.w3.org/TR/ws-resource-transfer/#Transferextensions
3. WS-Addressing, http://www.w3.org/Submission/ws-addressing/
4. http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=wsrf
5. http://docs.oasis-open.org/wsrf/wsrf-ws_resource_lifetime-1.2-spec-os.pdf

4.2.103 WS SecureConversation Specification. OASIS

This OASIS specification [1] defines extensions that build on WS-Security to provide a framework for requesting and issuing security tokens, and to broker trust relationships. The mechanisms defined in WS-Security provide the basic mechanisms on top of which secure messaging semantics can be defined for multiple message exchanges. This specification defines extensions to allow security context establishment and sharing, and session key derivation. This allows contexts to be established and potentially more efficient keys or new key material to be exchanged, thereby increasing the overall performance and security of the subsequent exchanges. The WS-Security specification focuses on the message authentication model. This approach, while useful in many situations, is subject to several forms of attack (see the Security Considerations section of the WS-Security specification). Accordingly, this specification introduces a security context and its usage. The context authentication model authenticates a series of messages, thereby addressing these shortcomings, but requires additional communications if authentication happens prior to normal application exchanges.

References

1. Home, http://www.oasis-open.org/committees/ws-sx

4.2.104 WS Security. OASIS

OASIS' WS-Security [1] is a communications protocol providing a means for applying security to Web Services, offering end-to-end security between web services. The risk of using WS-Security is so low that there is little to say in the way of contingency. WS-Security has almost no competitor in the WS security domain, so choosing an alternative could mean building an XML security schema from scratch. Actually, there isn't a real alternative to WS-Security for protecting web service messages. During implementation, the substantial alternative is to ignore message-based security and fully depend on transport-level security (e.g. TLS and SSL). That approach supports scenarios where all web service endpoints are exposed on the Internet, providing privacy protection. Conversely, it fails in scenarios where messages are routed into corporate networks or where security requirements (such as integrity protection, authentication and access control) need to be satisfied. At present, the main problem with WS-Security adoption is not related to the standard itself (which is quite mature, well adopted and backed by many large players), but to the inadequacy of support in still-young implementations. This implies carefully selecting web service stacks that provide a sufficient level of support for WS-Security, in order to avoid too much manual work to get simple scenarios working.

References

1. Home, http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=wss

4.2.105 WS Transactions (WS-TX). OASIS

WS-TX is also a standard developed within the OASIS consortium [1]. The purpose of the OASIS WS-TX technical committee is to define a set of protocols that allow the outcomes of distributed applications based on Web Services to be coordinated. The proposed specifications (WS-Coordination, WS-AtomicTransaction and WS-BusinessActivity) provide a framework to coordinate the execution of individual Web services into reliable applications. These specifications have been created jointly by Microsoft, IBM, and BEA and rely on the existing Web services specifications: WSDL, SOAP, and WS-Security. WS-TX is quite similar to WS-CAF (it is based on WS-CAF). However, the OASIS consortium decided to join both committees and keep WS-TX as the standard for transactions in Web Service infrastructures.

4.2.105.1 WS Coordination

The WS-Coordination specification describes an extensible and generic framework for providing protocols that coordinate the actions of distributed applications.

Such coordination protocols are used to support a number of applications, including those that need to reach consistent agreement on the outcome of distributed activities. The framework defined in this specification enables an application service to create a context needed to propagate an activity to other services and to register for coordination protocols. The framework enables existing transaction processing, workflow, and other coordination systems to hide their proprietary protocols and to operate in a heterogeneous environment. The main components of the coordination framework are: an activation service, which helps create a new activity; a registration service, to register an activity's participants; and a coordination service, to process an activity's completion. The relationship of these components with the specific coordination protocols is depicted in the following figure:

The following is a small example showing how a Web Service uses the framework. An application contacts the activation service to create an activity, which is identified by a coordination context. The context is a container (defined by an XML schema) with elements for an ID, a context expiration time, the coordination type (the coordination protocols to be used), and other extensible elements. Web services that participate in the same activity receive application messages with the context attached. Web services then use the context to identify the registration service and register as participants in the original activity. The coordination service controls the completion of the activity, based on the selected coordination protocol.

4.2.105.2 WS Atomic Transaction

The WS-AtomicTransaction specification provides the definition of the Atomic Transaction coordination type that is to be used with the extensible coordination framework described in WS-Coordination. It is similar to the ACID protocol of WS-CAF. The WS-AtomicTransaction specification provides the following coordination protocols:


- Completion: The completion protocol initiates commitment processing. Based on each protocol's registered participants, the coordinator begins with Volatile 2PC then proceeds through Durable 2PC. The final result is signaled to the initiator.
- Two-Phase Commit (2PC): The 2PC protocol coordinates registered participants to reach a commit or abort decision, and ensures that all participants are informed of the final result. The 2PC protocol has two variants:
  o Volatile 2PC: Upon receiving a Commit notification in the completion protocol, the coordinator begins the prepare phase of all participants registered for the Volatile 2PC protocol. All participants registered for this protocol must respond before a Prepare is issued to a participant registered for Durable 2PC. Further participants may register with the coordinator until the coordinator issues a Prepare to any durable participant. A volatile recipient is not guaranteed to receive a notification of the transaction's outcome. Participants managing volatile resources such as a cache should register for this protocol.
  o Durable 2PC: After receiving a Commit notification in the completion protocol and upon successfully completing the prepare phase for Volatile 2PC participants, the root coordinator begins the Prepare phase for Durable 2PC participants. All participants registered for this protocol must respond Prepared or ReadOnly before a Commit notification is issued to a participant registered for either protocol. Participants managing durable resources such as a database should register for this protocol.
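The two-phase commit flow described above, with volatile participants prepared before durable ones, can be sketched conceptually in Python. Coordination contexts, registration and the actual protocol messages are omitted, and the participant names and behaviour are invented for the example.

class Participant:
    def __init__(self, name, can_prepare=True):
        self.name = name
        self.can_prepare = can_prepare

    def prepare(self):
        print(f"{self.name}: {'Prepared' if self.can_prepare else 'Aborted'}")
        return self.can_prepare

    def commit(self):
        print(f"{self.name}: Committed")

    def rollback(self):
        print(f"{self.name}: Rolled back")

def two_phase_commit(volatile, durable):
    """Prepare volatile participants first, then durable ones; commit only if all prepared."""
    for participant in volatile + durable:
        if not participant.prepare():
            for p in volatile + durable:
                p.rollback()
            return False
    for participant in volatile + durable:
        participant.commit()
    return True

cache = Participant("cache (volatile)")
database = Participant("database (durable)")
print("outcome:", two_phase_commit([cache], [database]))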

A participant can register for more than one of these protocols by sending multiple Register messages. Developers can use any or all of the protocols defined by the specification when building applications that require consistent agreement on the outcome of short-lived distributed activities that have the all-or-nothing property.

4.2.105.3 WS Business Activity

Developers can use these protocols when building applications that require consistent agreement on the outcome of long-running distributed activities. It is similar to WS-CAF's Business Process protocol. This specification also defines two specific Business Activity agreement coordination protocols for the Business Activity coordination types:

- BusinessAgreementWithParticipantCompletion: A participant registers for this protocol with its coordinator, so that its coordinator can manage it. A participant must know when it has completed all work for a business activity.
- BusinessAgreementWithCoordinatorCompletion: A participant registers for this protocol with its coordinator, so that its coordinator can manage it. A participant relies on its coordinator to tell it when it has received all requests to perform work within the business activity.

The WS-BusinessActivity specification provides the definition of two Business Activity coordination types, AtomicOutcome and MixedOutcome, which are to be used with the extensible coordination framework described in the WS-Coordination specification.


A coordinator for an AtomicOutcome coordination type must direct all participants to close or all participants to compensate. A coordinator for a MixedOutcome coordination type may direct each individual participant to close or compensate.

References

1. OASIS Committee, http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=ws-tx
2. WS-Coordination (v 1.1), http://docs.oasis-open.org/ws-tx/wscoor/2006/06
3. WS-Atomic Transaction (v 1.1), http://docs.oasis-open.org/ws-tx/wsat/2006/06
4. WS-Business Activity (v 1.1), http://docs.oasis-open.org/ws-tx/wsba/2006/06

Other Links

JBossTransactions. An open-source product compliant with WS-TX, http://wiki.jboss.org/wiki/JBossTransactions

4.2.106 WS Transfer. W3C

WS-Transfer specifies the means to make web services stateful (compare to WS-RF). However, unlike WS-RF, WS-Transfer complies with the basic rules of the web services community, that is to say it keeps functionality to a minimum, therefore allowing different specifications to be mixed in order to extend functionality. WS-Transfer expects one single state set for each EPR (endpoint reference, compare to WS-Addressing), i.e. no individual parameters may be altered. Therefore, the basic methods recommended by WS-Transfer are creation, editing and destruction of a resource as a whole. The major key player behind WS-Transfer is Microsoft. Of course, this implies uptake at least on the Windows platform, which reference implementations already show. As pointed out, WS-Transfer is a competitor of WSRF and the two specification sets are incompatible. There are current efforts to form an alliance between WSRF and WS-Transfer in order to lead to a common specification. In the meantime, by using a minimum of functionality, it should be easier to keep interoperability between implementations of these two specifications. Until now, implementations of WS-Transfer seem to be a bit more stable than WS-RF ones. Anyhow, adopting WS-Transfer carries the risk that WS-RF may find larger acceptance and uptake in the eBusiness community.

References

1. Web Services Transfer (WS-Transfer), http://www.w3.org/Submission/WS-Transfer/


4.2.107 WS Trust. OASIS

OASIS' WS-Trust [1] can be used for exchanging security credentials between parties using the secure messaging capabilities of WS-Security. WS-Trust is slightly less mature than WS-Security. There are many reasons to suggest WS-Trust (cf. WS-Security): it is backed by major players and is available in WCF. If WS-Trust is not adopted, creating a custom specification or adopting another one would be riskier than simply adopting WS-Trust.

References

1. Specification, http://docs.oasis-open.org/ws-sx/ws-trust/200512

4.2.108 XForms. W3C

Traditional HTML Web forms don't separate the purpose from the presentation of a form. XForms 1.0 [1], in contrast, is composed of separate sections that describe what the form does and how the form looks. This allows for flexible presentation options, including classic XHTML forms, to be attached to an XML form definition. The following figure illustrates how a single device-independent XML form definition, called the XForms Model, has the capability to work with a variety of standard or proprietary user interfaces:


The XForms User Interface provides a standard set of visual controls that are targeted toward replacing today's XHTML form controls. These form controls are directly usable inside XHTML and other XML documents, like SVG. Other groups, such as the Voice Browser Working Group, may also independently develop user interface components for XForms.

An important concept in XForms is that forms collect data, which is expressed as XML instance data. Among other duties, the XForms Model describes the structure of the instance data. This is important, since like XML, forms represent a structured interchange of data. Workflow, auto-fill, and pre-fill form applications are supported through the use of instance data. Finally, there needs to be a channel for instance data to flow to and from the XForms Processor. For this, the XForms Submit Protocol defines how XForms send and receive data, including the ability to suspend and resume the completion of a form. The following illustration summarizes the main aspects of XForms:

Key Goals of XForms

Support for handheld, television, and desktop browsers, plus printers and scanners
Richer user interface to meet the needs of business, consumer and device control applications
Decoupled data, logic and presentation
Improved internationalization
Support for structured form data
Advanced forms logic
Multiple forms per page, and pages per form
Suspend and Resume support
Seamless integration with other XML tag sets

There is also a Candidate Recommendation of XForms 1.1 [2].

References

1. XForms 1.0, http://www.w3.org/TR/2007/REC-xforms-20071029/
2. XForms 1.1 W3C Candidate Recommendation, http://www.w3.org/TR/2007/CR-xforms11-20071129/

4.2.109 XHTML 1.1. W3C
The most recent XHTML W3C Recommendation is XHTML 1.1 [1], which is a reformulation of XHTML 1.0 Strict, with minor modifications, using a set of modules selected from a larger set defined in Modularization of XHTML, a W3C Recommendation which provides a modularization framework, a standard set of modules, and various conformance definitions. All deprecated features of HTML, such as presentational elements and framesets, and even lang and anchor name attributes, which were still allowed in XHTML 1.0 Strict, have been removed from this version. Presentation is controlled purely by Cascading Style Sheets (CSS). This version also allows for ruby markup support, needed for East-Asian languages (especially CJK). The purpose of XHTML 1.1 is to serve as the basis for future extended XHTML 'family' document types, and to provide a consistent, forward-looking document type cleanly separated from the deprecated, legacy functionality of HTML 4 that was brought forward into the XHTML 1.0 document types. XHTML 1.1 is most similar to XHTML 1.0 Strict, built using XHTML Modules. This means that many facilities available in other XHTML Family document types (e.g., XHTML Frames) are not available in this document type. These other facilities are available through modules defined in XHTML Modularization, and document authors are free to define document types based upon XHTML 1.1 that use these facilities. Although Modularization of XHTML allows small chunks of XHTML to be re-used by other XML applications in a well-defined manner, and XHTML to be extended for specialized purposes, XHTML 1.1 adds the concept of a "strictly conforming" document: such a document cannot employ such features; it must be a complete document containing only elements defined in the modules required by XHTML 1.1. For example, if a document is extended by using elements from the XHTML Frames (frameset) module, it may still be described as XHTML 1.1, but not strictly conforming XHTML 1.1. Instead, it might be described as an XHTML Host Language Conforming Document, if the relevant criteria are satisfied. There is also a Working Draft of a second edition of XHTML 1.1 [2].

References

1. XHTML 1.1: Module-based XHTML, http://www.w3.org/TR/2001/REC-xhtml11-20010531/
2. XHTML 1.1: Module-based XHTML - Second Edition, http://www.w3.org/TR/2007/WD-xhtml11-20070216

4.2.110 XHTML 2.0. W3C
XHTML 2.0 [1] is a general-purpose markup language designed for representing documents for a wide range of purposes across the World Wide Web. To this end it does not attempt to be all things to all people, supplying every possible markup idiom, but to supply a generally useful set of elements. The main changes defined in the working draft, compared to XHTML 1.1, are:
HTML forms will be replaced by XForms (see XForms).
HTML frames will be replaced by XFrames.
DOM Events will be replaced by XML Events, which uses the XML Document Object Model (see XML Events).

In designing XHTML 2.0, a number of design aims were kept in mind to help direct the design:

As generic XML as possible: if a facility exists in XML, try to use that rather than duplicating it.
Less presentation, more structure: use style sheets for defining presentation.
More usability: within the constraints of XML, try to make the language easy to write, and make the resulting documents easy to use.
More accessibility: some call it 'designing for our future selves' – the design should be as inclusive as possible.
Better internationalization: since it is a World Wide Web.
More device independence: new devices coming online, such as telephones, PDAs, tablets, televisions and so on mean that it is imperative to have a design that allows you to author once and render in different ways on different devices, rather than authoring new versions of the document for each type of device.
Less scripting: achieving functionality through scripting is difficult for the author and restricts the type of user agent you can use to view the document. We have tried to identify current typical usage, and include those usages in markup.
Integration with the Semantic Web: make XHTML2 amenable for processing with semantic web tools.

References

1. XHTML 2.0, http://www.w3.org/TR/2006/WD-xhtml2-20060726/


4.2.111 XML Events. W3C The XML Events module [1] provides XML languages with the ability to uniformly integrate event listeners and associated event handlers with Document Object Model (DOM) Level 2 event interfaces [DOM2EVENTS]. The result is to provide an interoperable way of associating behaviors with document-level markup. An event is the representation of some asynchronous occurrence (such as a mouse click on the presentation of the element, or an arithmetical error in the value of an attribute of the element, or any of unthinkably many other possibilities) that gets associated with an element (targeted at it) in an XML document. In the DOM model of events [DOM2EVENTS], the general behavior is that when an event occurs it is dispatched by passing it down the document tree in a phase called capture to the element where the event occurred (called its target), where it then may be passed back up the tree again in the phase called bubbling. In general an event can be responded to at any element in the path (an observer) in either phase by causing an action, and/or by stopping the event, and/or by cancelling the default action for the event. The following diagram illustrates this:

XML events are the method of binding a DOM level 2 event at an element to an event handler. They encapsulate various aspects of the DOM level 2 event interface, thereby providing markup-level specification of the actions to be taken during the various phases of event propagation.
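The capture/bubble flow can be sketched in a few lines. The following Python fragment is only a conceptual illustration of the event flow described above (simplified so that the target element appears in both phases), not a DOM or XML Events implementation:

def dispatch(path_root_to_target, listeners):
    # listeners maps (element, phase) to a handler; a handler returning False stops propagation.
    for phase, path in (('capture', path_root_to_target),
                        ('bubble', list(reversed(path_root_to_target)))):
        for element in path:
            handler = listeners.get((element, phase))
            if handler is not None and handler(element, phase) is False:
                return  # propagation stopped by an observer

listeners = {
    ('body', 'capture'): lambda el, ph: print('observed at', el, 'during', ph),
    ('button', 'bubble'): lambda el, ph: print('handled at', el, 'during', ph),
}
dispatch(['html', 'body', 'button'], listeners)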

References

1. XML Events, http://www.w3.org/TR/2003/REC-xml-events-20031014/


4.2.112 XML Process Definition Language (XPDL). WfMC
XML Process Definition Language (XPDL) is an XML-based language specification to describe workflow automation supported by a workflow management system (WfMS). This specification has been produced by the Workflow Management Coalition (WfMC) and it is an evolution of a former language called Workflow Process Definition Language (WPDL). XPDL aims at supporting process definition interchange between WfMS tools, like editors, engines, etc.

References

1. XPDL 2.0, http://www.wfmc.org/standards/XPDL.htm

Other links

Workflow Management Coalition, http://www.wfmc.org/

4.3 Products

4.3.1 Active BPEL
Active BPEL Community Edition Engine is an open-source implementation of an OASIS BPEL v2.0 engine, developed by the Active Endpoints company and released under the GPL license. It also implements both the BPEL4People and WS-HumanTask specifications. The accompanying tool is ActiveBPEL Community Edition Designer, an Eclipse-based development environment. The Active BPEL engine also includes an administration console from which process administrators can manage the engine and the processes deployed and executed within it. Supported operations range over engine startup/shutdown, engine configuration, engine performance tuning, process execution management, graphical monitoring of process execution, persistence storage, diagnosis, etc. Active BPEL Community Edition Engine includes a large collection of BPEL examples for getting started with the engine. However, the documentation accompanying the OSS version is a bit scarce. Experiences with previous versions of the Active BPEL engine in the SeCSE project have not been very fruitful.

References

1. Active BPEL Community Edition 5.0, http://www.activevos.com/community-open-source-engine-download.php#final50
2. Active BPEL home page, http://www.activevos.com/community-open-source.php

Other links

Active BPEL examples, http://www.activebpel.org/samples/samples-4/samples.php


Active BPEL VOS documentation (including Active BPEL Community Edition), http://www.activebpel.org/infocenter/ActiveVOS/v50/index.jsp

4.3.2 Amazon Web Services
Amazon Web Services (AWS) [1] provides the computing platform of Amazon as a service to external companies. AWS provides a set of services that together form a reliable, scalable, and affordable platform for application development. Elastic Compute Cloud (EC2) [2] is the main commercial web service provided by Amazon. EC2 offers its clients resizable computing capacity in the computer cloud owned by Amazon. Their objective is to provide an easy-to-manage scalable computer infrastructure to deploy applications, in which the application provider pays only for the resources that it actually consumes, like instance-hours or data transfer. EC2 allows managing the complete virtual computing environment through the use of web service interfaces. Through these interfaces, the application provider is able to create images containing a ready-to-use custom application, add machines to the virtual environment, load them with a previously-created application image, manage access permissions, and so on. With regard to reliability and scalability, EC2 offers Amazon's highly reliable and powerful network infrastructure and data centres to run the virtual environments rented by the application providers. Failed instances can be replaced quickly and reliably commissioned in the virtual environment. Moreover, EC2 provides developers the tools to build failure-resilient applications and isolate themselves from common failure scenarios; it provides the ability to place instances in multiple, geographically dispersed locations to protect applications from failures of a single location. In order to redirect requests to the different locations, EC2 offers so-called elastic IP addresses. These IP addresses allow developers to mask instance or zone failures by programmatically remapping the public IP addresses to any other instance. In addition to instance replacement when failures occur, application providers can increase or decrease the capacity of their configurations within minutes when necessary. In addition to the computing facilities of EC2, Amazon also offers storage and data management options for application providers through the Amazon Simple Storage Service (Amazon S3) [3] and SimpleDB [4], respectively. These solutions have been designed to complement the EC2 virtual computing facilities. However, with this approach, the ability to guarantee consistency in the data managed by applications in the cloud, the ability to scale out the application and the ability to recover applications from failures seem to rely on the skills of the application developers using Amazon's EC2 APIs and not on the features provided by the virtual environment itself. That is, Amazon's EC2 does not offer a solution to achieve state/data consistency, high availability or scalability for applications running in a distributed environment, but only the means to achieve them. RightScale [5] is a company that provides a service enabling companies to create web solutions running on Amazon EC2 that are reliable, scalable, and offer high performance. It offers an advanced web console that eases the management of the underlying cloud computing infrastructure (e.g. a management console for databases, auto-scaling facilities, etc.). Very recently (Jan 2009), Amazon has announced a web console to manage its infrastructure. The difference is that the current Amazon EC2 console is a control panel that gives access to all EC2 functions, while RightScale is a complete management platform to design, deploy, and manage the life-cycle of mission-critical cloud deployments.
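As a rough illustration of the usage pattern described above (launching an instance from an image and remapping an elastic IP address), the sketch below uses the third-party Python library boto; the AMI identifier and region are placeholders and the exact calls are those of boto rather than an official Amazon SDK:

import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')            # credentials taken from env/config
reservation = conn.run_instances('ami-12345678',          # previously created application image
                                 instance_type='m1.small')
instance = reservation.instances[0]

# Elastic IP: allocate a public address and remap it to the new instance,
# e.g. to mask the failure of a previously running instance.
address = conn.allocate_address()
conn.associate_address(instance_id=instance.id, public_ip=address.public_ip)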

References

1. Amazon Web Services, http://aws.amazon.com/
2. Amazon EC2, http://aws.amazon.com/ec2/
3. Amazon Simple Storage Service, http://aws.amazon.com/s3/
4. Amazon SimpleDB, http://aws.amazon.com/simpledb/
5. RightScale, http://www.rightscale.com/

4.3.3 Google Application Engine
With the introduction of Google Application Engine (GAE) [1] in early 2008, Google has gone one step further than Amazon's EC2 in multi-tier application development. GAE lets programmers develop (through an SDK) and run web applications on Google's infrastructure. Developers can code their applications in a local development environment that simulates Google App Engine on their own computers, and then they are able to deploy the application on the remote Google servers. Google claims that this saves its users from dealing with server maintenance issues and allows their applications to stay always available and to be scaled out. The current language supported by GAE is Python; in the near future it will be possible to use other languages. GAE provides the following features: dynamic web serving, with full support for common web technologies; persistent storage with queries, sorting and transactions; automatic scaling and load balancing. To ensure application availability, any GAE application runs on many web servers simultaneously. A client request can go to any web server (load-balancing cluster), and multiple requests from the same user may be handled by different web servers. Each application runs in a restricted "sandbox" environment. In this environment, the application can execute code and use the services that GAE provides (datastore, mail, accounting, URL fetch, image manipulation, logging...). The core service of GAE is its scalable transactional datastore. It stores and performs queries over data objects, known as entities, which are quite similar to the entity beans of J(2)EE. An entity has one or more properties, named values of one of several supported data types. A property can be a reference to another entity, to create one-to-many or many-to-many relationships with other entities. Although the datastore provides features similar to those of relational databases, it is not a relational database. Unlike traditional databases, the datastore uses a distributed architecture to manage scaling to very large data sets. A GAE application can optimize how data is distributed by describing relationships between data objects, and by defining indexes for queries.


In order to deal with concurrent accesses of users to the same data, the datastore supports transactions. The datastore can execute multiple operations in a single transaction, and roll back the entire transaction if any of the operations fail. It uses optimistic locking for concurrency control. An update of an entity occurs in a transaction that is retried a fixed number of times if other processes are trying to update the same entity simultaneously. The datastore implements transactions across its distributed network using "entity groups". Each transaction is limited to manipulating only entities that belong to the same single group. Entities of the same group are stored together for efficient execution of transactions. Applications can assign entities to groups when the entities are created. GAE also provides a cache service called Memcache. Memcache implements a high-performance in-memory key-value cache that is accessible transparently by multiple instances of the same application. Memcache is useful for storing data that does not need persistence but requires high-speed access (similar to the stateful data that is usually stored in J(2)EE's SFSBs). However, the cache does not provide transactional features, so concurrent accesses to this resource can cause problems if the accesses to data are not controlled specifically by the developer. Moreover, since GAE is such a controlled and limited environment, some other drawbacks arise. Accesses to external computers are only allowed through the provided URL fetch and email services and APIs, and only incoming requests through the HTTP and HTTPS protocols are accepted. GAE does not allow writing to the file system (all persistent data must be written into the datastore). As happens with the J(2)EE technology, GAE applications cannot spawn sub-processes or threads. Client requests that take a long time to respond are automatically terminated to avoid overloading the web server. This means that long-running activities are not allowed in GAE.
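The following minimal sketch shows the datastore and Memcache usage pattern just described, using the Python db and memcache APIs shipped with the App Engine SDK; it only runs inside the GAE runtime/SDK, and the entity kind and property names are invented for illustration:

from google.appengine.ext import db
from google.appengine.api import memcache

class Account(db.Model):                    # an entity kind stored in the datastore
    owner = db.StringProperty()
    balance = db.IntegerProperty()

def credit(key, amount):
    # Executed inside a datastore transaction; retried on concurrent updates.
    account = db.get(key)
    account.balance += amount
    account.put()

key = Account(owner='alice', balance=0).put()        # persist a new entity
db.run_in_transaction(credit, key, 10)               # optimistic concurrency, one entity group

# Memcache: fast, shared, non-persistent and non-transactional storage
memcache.set('balance:%s' % key, 10, time=60)
cached_balance = memcache.get('balance:%s' % key)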

References

1. GAE, http://code.google.com/appengine/

4.3.4 Google BigTable
Google's BigTable [1] aims to build a more application-friendly storage service using as building blocks other projects and techniques developed at Google, such as the Google Distributed File System and Map/Reduce. Google defines BigTable as "a sparse, distributed, persistent multidimensional sorted map". It was designed with the following goals in mind:
1. Wide applicability. It is used by more than sixty products inside Google.
2. Scalability. Designed to reliably scale to petabytes of data and thousands of machines.
3. High performance. Access to the contents of the table is very fast.
4. High availability. Data is replicated on multiple servers to keep it always available.


The table is indexed through three elements: 1) a row key, 2) a column key and 3) a timestamp. The content of an indexed element is an unstructured array of bytes. The timestamps allow having multiple versions of the same data. The structure of a table is as follows. A table is composed of multiple tablets. Each tablet contains a range of rows of the complete table, and the ranges do not overlap. Each tablet is built out of multiple SSTables. An SSTable is an immutable sorted file of key-value pairs that contains chunks of data plus an index of block ranges. SSTables can be shared by two different tablets. This structure is distributed over multiple nodes. As in GFS, there is a master server responsible for load balancing and fault tolerance, and a set of tablet servers, each of which manages a set of tablets and handles read and write requests to the tablets that it has loaded. The master uses a service called Chubby for a variety of tasks:
1. to monitor that there is at most one active master (of BigTable) at any time
2. to monitor the health of tablet servers and restart them if a failure occurs
3. to store the bootstrap location of BigTable data
4. to discover tablet servers and shut down dead tablet servers
5. to store BigTable schema information
6. to store access control lists
Moreover, the master uses GFS to replicate data across a set of nodes. Additionally, when updates on rows occur, they are applied in-memory, and a log file is also stored in GFS to react to failures. BigTable can also be used with MapReduce by means of wrappers which allow it to be used as an input source and as an output target for MapReduce jobs. HBase [2] is an open-source version of BigTable, modelled after the features of BigTable that were published. HBase provides BigTable-like capabilities on top of Hadoop Core. The data model and the architecture used by HBase are similar to the ones used in BigTable.
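The data model (a sparse map indexed by row key, column key and timestamp, with multiple versions per cell) can be sketched as follows; this is only a conceptual illustration in Python, not an actual BigTable or HBase client, and the row and column names are invented:

import time

table = {}   # {(row_key, column_key): {timestamp: value_bytes}}

def put(row, column, value, ts=None):
    ts = ts if ts is not None else time.time()
    table.setdefault((row, column), {})[ts] = value

def get(row, column, n_versions=1):
    # Return the most recent versions first; several versions of a cell coexist.
    versions = table.get((row, column), {})
    return sorted(versions.items(), reverse=True)[:n_versions]

put('com.example.www/index.html', 'contents:', b'<html>v1</html>')
put('com.example.www/index.html', 'contents:', b'<html>v2</html>')
print(get('com.example.www/index.html', 'contents:', n_versions=2))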

References

1. BigTable Paper, http://labs.google.com/papers/bigtable.html
2. Hadoop HBase, http://hadoop.apache.org/hbase/

4.3.5 Google File System
Google File System (GFS) [1] is a scalable distributed file system for large distributed data-intensive applications. It was designed to satisfy the storage requirements of Google, so it provides fault tolerance and allows efficient, high-performance access to high volumes of data using large clusters of commodity hardware. There are two types of nodes: one Master node and a large number of Chunkservers. On the one hand, the chunkservers store the data files, with each individual file broken up into fixed-size chunks (hence the name) of 64 megabytes, similar to clusters or sectors in regular file systems. Thus, the chunkservers are responsible for performing the read and write operations on these large chunks of data. The Master node stores all the metadata associated with the chunks (e.g. the tables mapping the 64-bit labels to chunk locations and the files they make up, the locations of the copies of the chunks, what processes are reading or writing to a particular chunk). This metadata is updated by the Master server periodically receiving heart-beat messages from each chunkserver. GFS includes a garbage collection mechanism that allows the master node to lazily remove files from the storage. When a file is removed, its contents are not available anymore for the users, but the file is still preserved in the storage for a configurable period if there is enough space. This allows recovering a removed file if required. GFS provides fault tolerance by means of the following techniques and properties: 1) constantly monitoring the parts of the file system structure, 2) replicating crucial data (both metadata and chunks) and 3) providing a fast and automatic recovery mechanism when failures occur in the nodes. A monitoring infrastructure outside GFS constantly monitors the state of the components of the file system. For example, this infrastructure is responsible for detecting failures in the master node in order to start a new master process on another node. Thus, the master node is replicated for reliability, because it contains the key data that gives access to the rest of the file system. The chunks are also replicated on multiple chunkservers spread across multiple racks. Chunks are replicated on 3 machines, and the master is responsible for ensuring that replicas exist. Chunk replication allows tolerating chunkserver failures. The frequency of these failures motivated a novel online repair mechanism that regularly and transparently repairs the damage and compensates for lost replicas as soon as possible. Finally, since multiple physical disk units are involved at the lower level of the system infrastructure, GFS also includes a checksumming mechanism that allows detecting data corruption at the disk or IDE subsystem level. The Hadoop Distributed File System (HDFS) [2] is an open-source implementation of GFS. It is used by Hadoop Map/Reduce and is included in the Hadoop Core project. Thus, HDFS is a distributed file system designed to store large files across a cluster of multiple nodes. The main assumptions and objectives pursued by HDFS are:
1. Detection of failures and quick, automatic recovery from them. This is a main architectural goal of HDFS, because in a file system distributed over multiple machines, hardware failures are the norm rather than the exception.
2. Emphasis on high throughput rather than on low latency. The applications running on HDFS are not standard applications and need streaming access to their data sets.
3. Emphasis on large files. HDFS is tuned to support large files.
4. Simple Coherency Model. Most of the files managed by HDFS follow the write-once-read-many model, so data coherency issues are simplified.
5. Follow the "Moving Computation is Cheaper than Moving Data" philosophy. HDFS provides interfaces for applications to move themselves closer to where the data is located.

6. Portability Across Heterogeneous Hardware and Software Platforms. HDFS is based on the Java programming language, so it is easily portable from one platform to another.
Like GFS, HDFS is built from a cluster of data nodes, each of which serves up blocks of data over the network using a block protocol specific to HDFS. The data nodes can talk to each other in order to perform specific actions such as rebalancing data, moving copies of the data between nodes or keeping the replication of data high. Additionally, HDFS can also serve data over the HTTP protocol, which allows accessing all content from a web browser. In a similar way to GFS, an HDFS file system requires a unique server, the name node, to manage the set of data servers. The name node is a single point of failure for an HDFS installation, so, as in GFS, reliability is provided by replicating the name node and data across multiple nodes. The default replication value for data nodes is 3, which means that data is stored on three nodes: two on the same rack, and one on a different rack.
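The chunking and replica-placement scheme described above can be sketched as follows; the chunk size and replication factor follow the text (64 MB, 3 replicas), while the round-robin placement policy and server names are simplifications invented for illustration, not the actual GFS/HDFS algorithms:

CHUNK_SIZE = 64 * 1024 * 1024      # fixed-size chunks/blocks
REPLICATION = 3                    # default number of replicas

def number_of_chunks(file_size):
    # How many chunks a file of file_size bytes is broken into.
    return (file_size + CHUNK_SIZE - 1) // CHUNK_SIZE

def place_replicas(chunk_index, chunkservers):
    # Master/name-node-side metadata: map a chunk to REPLICATION chunkservers
    # (simple round robin here; real systems also consider racks, load and free space).
    start = chunk_index % len(chunkservers)
    return [chunkservers[(start + i) % len(chunkservers)] for i in range(REPLICATION)]

servers = ['cs1', 'cs2', 'cs3', 'cs4', 'cs5']
metadata = {i: place_replicas(i, servers)
            for i in range(number_of_chunks(200 * 1024 * 1024))}
print(metadata)    # {0: ['cs1', 'cs2', 'cs3'], 1: ['cs2', 'cs3', 'cs4'], ...}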

References

1. GFS Paper, http://labs.google.com/papers/gfs.html
2. Hadoop Filesystem, http://hadoop.apache.org/core/docs/r0.19.0/hdfs_design.html

4.3.6 Google MapReduce
Current computer systems in big companies must deal with lots of data and lots of incoming requests. For many scenarios, either there are no commercial systems big enough to deal with such requirements or they are not affordable for many companies because of the cost. In order to solve these problems, there exist hundreds of implementations of special-purpose computations that have been designed to process large amounts of data using clusters of commodity machines. Many of these implementations have to deal with the same problems, which are basically parallelization, fault tolerance, data distribution and load balancing. MapReduce [1] is a programming model introduced by Google that tries to simplify the development of applications that process large amounts of data using distributed and parallel computation. The programming model is based on the Map/Reduce primitives present in functional languages (e.g. LISP). The Map primitive processes a set of input key/value pairs and computes a set of intermediate key/value pairs. The Reduce primitive combines all the intermediate values that share the same key and produces a set of merged output values (usually just one per key). The Map/Reduce primitives are defined by the users specifically for each application. The MapReduce programming model offers several opportunities for parallelizing the different computations:
1. Parallel map over input. Instead of processing the input data (key/value pairs) one by one, it is well known that this pattern of a list map is amenable to total data parallelism. In principle, the list map may be executed in parallel at the granularity level of single elements. Map must be a pure function so that the order of processing key/value pairs does not affect the result of the map phase and communication between the different threads can be avoided.
2. Parallel grouping of intermediate data. The grouping of intermediate data by key, as needed for the reduce phase, is essentially a sorting problem. If we assume a distributed map phase, then it is reasonable to anticipate grouping to be aligned with distributed mapping. That is, grouping could be performed for any fraction of intermediate data and distributed grouping results could be merged centrally.
3. Parallel map over groups. Reduction is performed for each group (which is a key with a list of values) separately. Again, the pattern of a list map applies here; total data parallelism is admitted for the reduce phase, just as much as for the map phase.
4. Parallel reduction per group. Reduce is itself an operation that collapses a list into a single value by means of an associative operation and its unit. Then, each application of Reduce can be massively parallelized by computing sub-reductions in a tree-like structure while applying the associative operation at the nodes. If the binary operation is also commutative, then the order of combining results from sub-reductions can be arbitrary.
Google claims that MapReduce is highly scalable and is present in multiple applications of the Google infrastructure that are running on large clusters of commodity machines. Moreover, programmers find the system easy to use. The open-source Hadoop project [2] from Apache provides, through the Hadoop Core sub-project, support for the MapReduce programming model. Hadoop Map/Reduce [3] provides a framework for easily writing applications which process vast amounts of data (multi-terabyte data sets) in parallel on large clusters (thousands of nodes) of commodity hardware in a reliable, fault-tolerant manner. Applications specify the input/output locations and supply Map and Reduce functions via implementations of appropriate interfaces.
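A minimal, single-process sketch of the programming model follows: user-defined map and reduce functions plus grouping of intermediate pairs by key (a word-count example). Real frameworks distribute each phase across a cluster; this fragment only illustrates the data flow:

from collections import defaultdict

def map_fn(_, line):                       # emits intermediate (key, value) pairs
    for word in line.split():
        yield word, 1

def reduce_fn(word, counts):               # combines all values sharing the same key
    yield word, sum(counts)

def map_reduce(records, map_fn, reduce_fn):
    groups = defaultdict(list)
    for key, value in records:
        for k, v in map_fn(key, value):    # map phase (parallelizable per record)
            groups[k].append(v)            # grouping/shuffle phase
    output = []
    for k, vs in groups.items():           # reduce phase (parallelizable per group)
        output.extend(reduce_fn(k, vs))
    return output

documents = [(1, 'the quick fox'), (2, 'the lazy dog')]
print(map_reduce(documents, map_fn, reduce_fn))   # [('the', 2), ('quick', 1), ...]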

References

1. MapReduce Paper, http://labs.google.com/papers/mapreduce.html
2. Hadoop, http://hadoop.apache.org/
3. Hadoop MapReduce Tutorial, http://hadoop.apache.org/core/docs/current/mapred_tutorial.html

4.3.7 IRS3
IRS (Internet Reasoning Service) is a Semantic Web Services framework, developed by the Knowledge Media Institute (KMI) of the Open University, used by applications to semantically describe and execute WS. Two versions of IRS have been released, IRS II and IRS III. The latest one is being used by the WSMO Working Group and by some ongoing EC-funded projects like Super or Luisa. The IRS III architecture is depicted in the picture below. IRS III consists of an IRS Server, an IRS Publisher and an IRS Client, all communicating with each other through WS standards.

The IRS Server holds descriptions of SWS and mappings between those semantic descriptions and specific WS. The IRS Publisher, besides creating those mappings within the IRS Server, also creates wrappers that support invoking not only legacy WS but also legacy Java code. The IRS Client is a capability-driven WS invocation client: WS consumers need only describe the goal (or task) to be achieved, and the IRS Client selects and invokes the appropriate WS. IRS III is an evolution of IRS II adding support for WSMO-based SWS. IRS II, which has its background in the IBROW project, makes use of knowledge descriptions in the form of ontologies representing domain and task models: domain models, describing the application realm; task models, describing the tasks to be performed by services to satisfy consumers' requests; and problem solving methods (PSMs), describing the reasoning schema applied to satisfy tasks.

References

1. IRS3, http://kmi.open.ac.uk/projects/irs/papers/IRS-III_JWS_revised_final.pdf

Other links: IRS3 project: http://kmi.open.ac.uk/projects/irs/

4.3.8 JBoss jBPM
jBPM is a JBoss platform that supports the execution of Business Processes (BPs) described by languages ranging from BPM (jPDL) to service orchestration (BPEL4WS). jBPM leverages an underlying technology, the Process Virtual Machine (PVM), to support the execution of processes described in several languages, since the PVM hides process description particularities. The process languages currently supported are: jPDL, a JBoss proprietary BPM language; BPEL4WS, a language for service orchestration; and Seam Pageflow, a graphical web page flow definition language for SEAM applications. jBPM could incorporate support for new process languages when requested. However, jBPM still doesn't support XPDL, which is considered a flaw, since XPDL is considered by many to be the de facto BPM executable language. jBPM has been conceived to require minimal dependencies; therefore, it can be packaged together with any Java application, or within J2EE clustered servers, depending on NFRs like scalability, high availability, high reliability and so on.

jBPM BPEL supports BPEL4WS 1.1/2.0. For instance, BPEL4WS processes authored with the Eclipse BPEL Designer or other BPEL4WS editors can be deployed and executed within the jBPM BPEL engine. The picture below depicts the jBPM modular architecture. jBPM comes with dedicated engines supporting the jPDL, BPEL4WS and Pageflow process languages on top of the PVM engine. Besides, jBPM comes with the Graphical Process Designer (GPD), an Eclipse-plugin-based graphical process authoring tool that currently supports only jPDL.

References

1. JBoss jBPM home page: http://www.jboss.com/products/jbpm

Other links:

JBoss jBPM community: http://www.jboss.org/jbossjbpm/
Eclipse BPEL Designer project: http://www.eclipse.org/bpel/
Graphical Process Designer: http://www.jboss.org/jbossjbpm/gpd/

4.3.9 NovaBPM: Nova Orchestra/Bonita
NovaBPM is a business process management suite comprising Nova Orchestra (open-source BPEL orchestration) and Nova Bonita (an open-source workflow solution compliant with XPDL). Bull/OW2's Nova Orchestra is a complete BPM solution, written in Java, supporting the execution of BPM processes described in BPEL v2.0 and XPDL. Nova Orchestra includes a BPM engine, a web 2.0 process management console and a graphical BPEL designer. The Nova Orchestra suite is released under the LGPL license. The Orchestra engine relies on the Process Virtual Machine (PVM) technology, developed jointly by Bull and JBoss, which provides a generic process engine for BPM-process-based applications, enabling support for multiple languages including BPEL and XPDL. Other Orchestra features:

Full integration with the PeTALS ESB.
Lightweight and heavyweight versions, deployable with other applications (i.e. Swing-based) or in J2EE containers.
JMX-based administration API.
Fractal engine design, especially suitable to support complex process definitions and some sort of dynamic process composition.
Integration with Bonita, an XPDL workflow engine, also developed by Bull/OW2, leveraging the same Process Virtual Machine technology.

References

1. Orchestra home page: http://orchestra.objectweb.org/xwiki/bin/view/Main/WebHome

Other links:

Bonita XPDL engine, http://wiki.bonita.objectweb.org/xwiki/bin/view/Main/
OW2 home: http://www.ow2.org/
Process Virtual Machine (PVM) project, http://wiki.bonita.objectweb.org/xwiki/bin/view/Main/FAQPVM

4.3.10 Salesforce.com
Salesforce.com [1] is a company devoted to offering CRM products. The main novelty of Salesforce.com is that the suite of solutions they offer for CRM is based on a cloud computing platform developed by the company. This allows Salesforce.com to offer its customers a complete CRM solution based on its platform and software as a service (SaaS) facilities, sparing them the need to invest in other software to run their businesses. Force.com is a platform for building and deploying enterprise applications that run in the Salesforce.com cloud. Salesforce.com claims that using Force.com and their services, applications such as enterprise resource planning (ERP), human resource management (HRM) and supply chain management (SCM) can be developed without any other software in days or weeks instead of months. They provide a powerful yet easy-to-use development model that allows assembling applications, components and code instantly and deploying them on the Salesforce.com infrastructure. Applications are built using Apex, a proprietary Java-like programming language for the Force.com platform, and Visualforce, an XML-like syntax for building user interfaces in HTML, AJAX or Flex. The main advantages provided to developers by the Force.com model are:
1. Multitenancy. Multitenancy is a principle in software architectures in which a single instance of the software runs on the platform of the vendor, serving multiple client organizations (tenants). Multitenancy offers built-in security, reliability, upgradeability, and ease of use.
2. Tools and features to speed up development, such as analytics, offline access, and mobile deployment.
3. Freedom from managing and maintaining any server infrastructure, even as applications scale to tens of thousands of users.
Some of these advantages are similar to the ones provided by Amazon's Web Services infrastructure. However, Salesforce.com goes one step further, offering not only an easy-to-manage and adaptable infrastructure but also, as Google App Engine does, the means to build applications on top of it.

Moreover, as Amazon and Google do, Salesforce.com also offers database services for applications through Force.com Database Services (http://wiki.apexdevnet.com/index.php/Database_Services), allowing users to create objects to store crucial data and delegating to the platform the management of the difficult tasks needed to ensure that the data is safe and continually available. Additionally, the Force.com platform provides the resources for integrating applications into proprietary environments to access data in other systems, to create mash-ups that combine data from multiple sources, or to include external systems into your processes. This is done through the Force.com API, which provides easy access to all the information stored in a Force.com application through an open, standards-based SOAP Web service. The solution offered by Salesforce is tightly bound to the use of their services and may be risky for customers: customers must rely only on the services provided by Salesforce in order to develop their applications.

References

1. SalesForce, http://www.salesforce.com

4.3.11 SeCSE Registry
Registries like UDDI and ebXML have several limitations in how users can search for services. These limitations are related to how services are described. The problem of describing and searching for services is not a new one; it is an evolution of the problem of describing and searching for software components in COTS (Commercial Off-The-Shelf) component repositories [1]. One of the possible approaches to the problem is the usage of facets [2]. In the early definition, facets are a set of key-value pairs that describe properties of a system, including both functional qualities (e.g., data formats supported, functionalities offered, etc.) and non-functional ones (e.g., price, reliability, response time, etc.). Facets allow providers to describe the relevant aspects of a software system in a structured way. Moreover, if a common and meaningful set of key-value pairs is defined, potential users can perform advanced searches inside a repository. Such queries can be more complex than the traditional keyword matching in a plain text description and exploit the additional semantics available in the facets, such as values in a specific range or in a pre-defined set. In this way, users can design queries specifying conditions such as the support of a specific set of features, a response time below a specific threshold, a price in a certain range, etc. The ability to find a specific service in a large registry is related to the quality of the taxonomy used to define the keys and the quality of the values inserted in the description by the provider. Taxonomies allow the definition of proper keys in a specific (and limited) domain area. For this reason, the usage of different taxonomies to cover different domains is a suitable solution to provide extensive support for facets. However, taxonomies are useless if the providers do not use them correctly and do not provide a complete description of their services through them. This approach requires a considerable amount of effort from the provider but is extremely useful from the point of view of the user that is looking for a service.


This basic definition of facets is very limited since it is not able to support complex descriptions, relations among the defined attributes, etc. In many cases, the usage of a single value or a set is not enough and some properties need to be described in a more expressive way. For this reason, the concept of facet has evolved to include complex structures based on XML technologies [3][4]. Facets can be described through a set of XML documents. A facet is defined as a set that includes a facet type and one or more facet specifications.

Example of facet structure

A facet type is a label that describes the high-level concept of the facet such as quality of service, interoperability, etc. while the facet specification is the actual implementation of the concept. It is possible that several facet specifications are associated to a single facet type providing different ways of describing the high-level concept. Every facet specification includes two documents: an XML schema that defines the structure and the related XML implementation.

Internal structure of the registry


In this system, there are two categories of facets: standard and custom. Standard facets are the pre-defined ones and can be identified through a unique id; in this way, there is no need to exchange information regarding the XML schema related to them, since they refer to a well-known one stored in the system by default. Custom facets are user-defined and, for this reason, users always have to specify the XML and the related schema to allow the system and the other users to use the information correctly. This structure of the facets allows users to perform complex queries based on the semantics available in the XML schema definitions. Such queries can be implemented through XPath and XQuery, the standard technologies for the extraction of information from XML documents.
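As a rough illustration of facet-based querying with standard XML tooling, the sketch below defines a small facet specification instance and evaluates a structured condition over it; the element names and the QoS facet are invented for illustration and do not reproduce the actual SeCSE schemas (richer queries would use a full XPath/XQuery engine, as noted above):

import xml.etree.ElementTree as ET

facet_xml = """
<facetSpecification type="QoS">
  <responseTimeMs>120</responseTimeMs>
  <pricePerCall currency="EUR">0.02</pricePerCall>
</facetSpecification>
"""

facet = ET.fromstring(facet_xml)
# Structured query: keep services whose declared response time is below a threshold
# and whose price is within a given range.
response_time = int(facet.findtext('responseTimeMs'))
price = float(facet.findtext('pricePerCall'))
matches = response_time < 200 and 0.0 <= price <= 0.05
print(matches)   # True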

References

1. Clark J., Clarke C., De Panfilis S., Granatella G., Predonzani P., Sillitti A., Succi G., Vernazza T., "Selecting Components in Large COTS Repositories", Journal of Systems and Software, Elsevier, Vol. 73, No. 2, October 2004.
2. Prieto-Diaz R., Freeman P., "Classifying Software for Reusability", IEEE Software, Vol. 4, No. 1, January 1987.
3. Sawyer P., Hutchinson J., Walkerdine J., Sommerville I., "Faceted Service Specification", Workshop on Service-Oriented Computing: Consequences for Engineering Requirements (SOCCER), Paris, France, 30th August 2005.
4. Walkerdine J., Hutchinson J., Sawyer P., Dobson G., Onditi V., "A Faceted Approach to Service Specification", 2nd International Conference on Internet and Web Applications and Services (ICIW'07), Mauritius, May 2007.

4.3.12 Triple Space
Triple Space (TS) is a technology that provides semantic data persistence over a virtualised single shared space. This technology follows a communication and coordination pattern based on publishing and reading semantic data on a single memory shared by several applications or services exchanging that data. TS integrates Tuple Space, Semantic Web and WS technologies. TS improves on Tuple Space and other memory-sharing approaches since it provides them with semantic content and new structures that relate tuples in a scalable way. TS improves on WS technologies since it provides them with a new asynchronous communication model. TS provides a way for WS consumers and providers to exchange information through the publication and reading of semantic content over a globally accessible, world-wide (Internet-accessible) repository. Besides, the TS service scales well because it is hosted by different servers across the WWW network. The providers of data can publish it at any point in time (time autonomy), independent of its internal storage (location autonomy), independent of the knowledge about potential readers (reference autonomy), and independent of its internal data schema (schema autonomy). TS can offer good support for Semantic WS (SWS) asynchronous communication; however, support for synchronous communication using TS is cumbersome, so TS shouldn't be considered only as a replacement, but as a complementary technology for SWS communication using current WS trends, like ESB. Even with an ESB, asynchronous communication is accomplished by using some WS-* standards, like WS-Addressing. TS has been integrated within the Web Service Execution Environment (WSMX). The integration of WSMX and Triple Space Computing is being done in different aspects: (1) enabling component management in WSMX using Triple Space Computing, (2) allowing external communication grounding in WSMX, (3) providing resource management, and (4) enabling communication and coordination between different inter-connected WSMX systems. In summary, Triple Space Computing acts as a middleware for WSMX, Web Services, different other Semantic Web applications, and users to communicate with each other. TS has been developed in the TripCom project. The picture below depicts the TripCom architecture for TS.

TripCom architecture for Triple Spaces
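The publish/read coordination pattern described above can be sketched as follows; this is a conceptual, in-memory illustration of tuple-space-style matching over RDF-like triples, not the TripCom API, and the triple values are invented:

space = set()   # a single shared space of (subject, predicate, object) triples

def publish(subject, predicate, obj):
    space.add((subject, predicate, obj))

def read(subject=None, predicate=None, obj=None):
    # None acts as a wildcard, as in tuple-space matching; readers need not know
    # who published the data or when it was published.
    return [t for t in space
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

publish('service:translate', 'hasInput', 'xsd:string')
publish('service:translate', 'hasOutput', 'xsd:string')
print(read(subject='service:translate'))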

References

1. TRIPCOM project: http://www.tripcom.org/

4.3.13 Windows Azure Platform

The Windows Azure platform is a set of cloud computing technologies, each providing a specific set of services to application developers. Microsoft claims that the set of provided services can be used together or independently to enable developers to use their existing skills to develop cloud applications, system integrators to provide services to their customers more cost-effectively, and businesses of all sizes to respond quickly as business needs change.

It aims to reduce the effort that a developer has to spend on managing operational resources, so that developers can focus on building cloud applications that have business value.

The main benefits that Windows Azure platform offers to the developers are:

Run commodity processes in the cloud
Build, modify, and distribute scalable applications with minimal on-premises resources
Perform large-volume storage, batch processing, and intense or large-volume computations
Create, test, debug, and distribute Web services quickly and inexpensively

The main components of the Windows Azure platform are:

Windows Azure, which provides a Windows-based environment for running applications and storing data on servers in Microsoft data centers
SQL Azure, which provides data services in the cloud based on SQL Server
.NET Services, which offers distributed infrastructure services to cloud-based and local applications

Windows Azure is a cloud services operating system that serves as the development, service hosting and service management environment for the Windows Azure platform. All the services and application built using this technology run on top of Windows Azure which acts as a runtime for them. As it runs on a large number of machines, it provides a fabric that unifies the whole. On top of the fabric are built the compute and the storage services; the compute service is responsible for the computation environment while the storage service is responsible for providing scalable storage for large scale needs.

The goal of SQL Azure is to offer a set of cloud-based services for storing and working with many kinds of information. Microsoft claims that SQL Azure will eventually include a range of data-oriented capabilities, including reporting, data analytics, and others, even though the first SQL Azure components announced so far are SQL Azure Database, providing a database management system (DBMS) in the cloud, and "Huron" Data Sync, that synchronizes relational data across various on-premises DBMSs.

The goal of .NET Services is to provide cloud-based infrastructure services that can be used by either on-premises applications or cloud applications. Indeed, running applications and storing data in the cloud are only a part of the story, and .NET services aims to fill the existing gap.

References

1. Windows Azure Platform, http://www.microsoft.com/windowsazure


4.3.14 WSMX
WSMX [1] (Web Service Modelling eXecution environment) is the reference implementation of WSMO (Web Service Modelling Ontology). It is an execution environment for the dynamic discovery, selection, mediation and invocation of semantic web services. Its internal language is WSML (Web Service Modelling Language). The WSMX environment is mainly developed by DERI Galway [2] and STI Innsbruck [3]. WSMX can achieve a user's goal by dynamically selecting a matching web service, mediating the data that needs to be communicated to this service and invoking it. By implementing an environment supporting service requester goals and goal-driven discovery and invocation, WSMX enables service requesters and providers to come together to achieve specific tasks even when these service requesters and providers are not aware of each other in advance and may have significant differences in their data and public behaviour models. In the design process of WSMX several steps were taken, including designing an architecture, describing a conceptual model and specifying the execution semantics. Execution semantics is the formal specification of the operational behaviour of a system. It allows Web Services whose semantics have been formally described to be discovered, selected, mediated and invoked to carry out specific client tasks. Semantics, in this context, is the meaning of various aspects of Web Services that allows machines to automatically carry out tasks using Web Services with minimum or no human intervention. The goal of WSMX is to provide a flexible environment for application and business integration based on strongly decoupled physical components with strong mediation services enabling every party to speak with each other, as advocated in WSMF. The architecture specification [4] (under development) of the system is shown below:

WSMX is developed in an open and participatory open source environment. WSMX is also a reference architecture of the OASIS Semantic Execution Environment (SEE) Technical Committee.

References

1. http://www.wsmx.org:8080/wsmxsite/
2. http://www.deri.ie/
3. http://www.sti2.at/
4. http://www.wsmo.org/TR/d13/d13.4/v0.3/


APPENDIX A: OTHER STANDARDS AND INITIATIVES
This section provides a list of standards, technologies and initiatives concerning the SOA world that NEXOF-RA is aware of but which are not included in the census provided in the previous sections. They are classified according to the domain of application or to the organization that has developed the standard/technology/initiative.

Business Process Modeling / Workflows

− Apache ODE
− JaWE
− Shark
− JPEd
− WfMOpen

Open Grid Forum OGF has a number of working groups with a focus on Storage and Data

− Data Format Description Language (dfdl)
− Database Access and Integration Services (dais)
− Grid File System (gfs)
− Grid Storage Management (gsm)
− GridFTP (gridftp)
− Info Dissemination (infod)
− OGSA ByteIO (byteio)
− OGSA Data Movement Interface (ogsa-dmi)
− OGSA Data (ogsa-d)
− Storage Networking (sn)

Data Management Forum The Data Management Forum has a number of Initiatives:

− Data Protection Initiative
− Information Lifecycle Initiative
− Long Term Archive and Compliance Storage Initiative

Servers, Grids and Virtualisation


Server virtualisation is currently characterised by a number of proprietary approaches. Major players in this area are:

− VMWare
− Xen, both an open source project (Xen.org) and offered commercially by Citrix, following their acquisition of XenSource.

− Microsoft
− Sun Microsystems

DMTF has recently accepted a submission from Dell, HP, IBM, Microsoft, VMWare and XenSource for the Open Virtual Machine Format. This allows combination of a number of different virtual machine types. It specifies an XML wrapper containing installation and configuration parameters for the virtual machines. OGF has a number of working groups with a focus on the Compute area:

− Grid Resource Allocation Agreement Protocol (graap)
− Grid Scheduling Architecture (gsa)
− Job Submission Description Language (jsdl)
− OGSA Basic Execution Services (ogsa-bes)
− OGSA High Performance Computing Profile (ogsa-hpcp)
− OGSA Resource Selection Services (ogsa-rss)

Networks

Local area networking is addressed by IT standards including DMTF. Telemanagement Forum broadens the scope to include networks operated by communication service providers. The range of standards and specifications in wide-area networking is very large and diverse, including both wired and wireless technologies. Some of the more relevant here include:

− Next Generation Networks (ETSI TISPAN, ITU-T, 3GPP). NGN is a complex set of standards dealing with the new wide-area network infrastructures that are being deployed by a number of network providers around the world. Possibly most relevant here is Service Oriented Interconnection (SoIx) which supports linking of NGN domains to provide defined levels of interoperability.

− Virtual Private Networks: IPSec and Multi-Protocol Label Switching are broadly representative of two approaches to virtual private networking. These are based on a large number of specifications from the IETF.

OGF has some activity on networks which addresses the interface between computing applications and networks:

− Grid High-Performance Networking (ghpn)
− Network Measurements (nm)


Infrastructure Management

− Management Using Web Services (WSDM-MUWS). MUWS is specified by OASIS as part of Web Services Distributed Management. It defines resource manageability interfaces.

− Management Of Web Services (WSDM-MOWS). MOWS is specified by OASIS as part of Web Services Distributed Management. It defines approaches to the management of Web Service endpoints using WS protocols.

References

1. DMTF Common Information Model: http://www.dmtf.org/standards/cim
2. Telemanagement Forum Shared Information/Data Model: http://www.tmforum.org
3. VMWare: http://www.vmware.com
4. Xen: http://xen.org
5. Citrix Xen Server: http://www.citrixxenserver.com
6. Microsoft Virtualization: http://www.microsoft.com/virtualization
7. Sun Virtualisation: http://www.sun.com/datacenter/consolidation
8. Storage Networking Industry Association: http://www.snia.org
9. Data Management Forum: http://www.dmforum.org
10. ETSI TISPAN: http://www.tispan.org
11. ITU NGN Focus Group: http://www.itu.int/ITU-T/ngn/fgngn/index.html
12. 3GPP: http://www.3gpp.org
13. Internet Engineering Task Force (IETF): http://www.ietf.org
14. OASIS: http://www.oasis-open.org
15. ITIL: http://www.itil-officialsite.com/
16. World Wide Web Consortium: http://www.w3.org
17. Open Grid Forum: http://www.ogf.org
