Privacy and Identity Management in Europe for Life

Second Report on Standardisation and Interoperability

Overview and Analysis of Open Source Initiatives

Combined deliverable: Merger of H 3.3.2 and D 3.4.2.

Editors: Patrik Bichsel (IBM), Thomas Roessler (W3C)
Reviewers: Benjamin Kellermann (TUD), Katalin Storf (ULD)
Identifier: H3.3.2 / D3.4.2
Type: Deliverable
Class: Public
Date: 28 February 2010

Copyright © 2008, 2009, 2010 by the PrimeLife Consortium. The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 216483.

Members of the PrimeLife Consortium

1. IBM Research GmbH IBM Switzerland

2. Unabhängiges Landeszentrum für Datenschutz ULD Germany

3. Technische Universität Dresden TUD Germany

4. Karlstads Universitet KAU Sweden

5. Università degli Studi di Milano UNIMI Italy

6. Johann Wolfgang Goethe - Universität Frankfurt am Main GUF Germany

7. Stichting Katholieke Universiteit Brabant TILT Netherlands

8. GEIE ERCIM W3C France

9. Katholieke Universiteit Leuven K.U.Leuven Belgium

10. Università degli Studi di Bergamo UNIBG Italy

11. Giesecke & Devrient GmbH GD Germany

12. Center for Usability Research & Engineering CURE Austria

13. Europäisches Microsoft Innovations Center GmbH EMIC Germany

14. SAP AG SAP Germany

15. Brown University UBR USA

Disclaimer: The information in this document is provided "as is", and no guarantee or warranty is given that the information is fit for any particular purpose. The below referenced consortium members shall have no liability for damages of any kind including without limitation direct, special, indirect, or consequential damages that may result from the use of these materials, subject to any liability which is mandatory due to applicable law. Copyright 2008-2010 by IBM Research GmbH, Unabhängiges Landeszentrum für Datenschutz, Technische Universität Dresden, Karlstads Universitet, Università degli Studi di Milano, Johann Wolfgang Goethe - Universität Frankfurt am Main, Stichting Katholieke Universiteit Brabant, GEIE ERCIM, Katholieke Universiteit Leuven, Università degli Studi di Bergamo, Giesecke & Devrient GmbH, Center for Usability Research & Engineering, Europäisches Microsoft Innovations Center GmbH, SAP AG, Brown University.

Abstract

In this document we provide an overview of the standardisation and open source landscape related to privacy, trust, and identity management as of January 2010: We review technology platforms and take a deeper look at some particularly relevant technologies; we also survey the standardisation organisations and open source projects relevant to the project. We further summarise the results already provided by, and still to be expected from, the PrimeLife project, and put them into the perspective of this landscape. In doing so, we identify standardisation and open source opportunities for the project. This document also serves as a basis for the further standardisation and open source strategy of PrimeLife. Indeed, PrimeLife values open source and open standards and has begun sharing its first results as open source software, public educational material, and contributions to open standardisation bodies. These efforts are also explained in this document.

This document is an update of the earlier (merged) deliverable D3.4.1/D3.3.1, and as such contains D3.4.2 (Second Report on Standardisation and Interoperability) and H3.3.2 (Overview and Analysis of Open Source Initiatives).

List of Contributors

Contributions from several PrimeLife partners are contained in this document. The following list presents the contributors for the chapters of this deliverable.

Abstract: Thomas Roessler (W3C)
Introduction: Jan Camenisch (IBM), Thomas Roessler (W3C)
PrimeLife Results: Patrik Bichsel (IBM), Jan Camenisch (IBM), Benjamin Kellermann (TUD), Gregory Neven (IBM), Franz-Stefan Preiss (IBM), Immanuel Scholz (TUD), Stefan Spitz (GD), Rigo Wenning (W3C)
Target Technology Platforms: Carine Bournez (W3C), Ulrich Pinsdorf (EMIC), Thomas Roessler (W3C), Rigo Wenning (W3C)
Architectures and Frameworks: Kai Rannenberg (GUF), Eduard De Jong (GD)
Policy and Rule Languages: Claudio Ardagna (UNIMI), Ulrich Pinsdorf (EMIC), Thomas Roessler (W3C), Pierangela Samarati (UNIMI), Mario Verdicchio (UNIBG), Rigo Wenning (W3C)
Authentication Infrastructure: Jan Camenisch (IBM), Markulf Kohlweiss (K.U.Leuven), Thomas Roessler (W3C), Stuart Short (SAP), Karel Wouters (K.U.Leuven)
User Control Infrastructure: Eduard de Jong (GD), Robert Mueller (GD)
Identity Systems: Michele Bezzi (SAP), Patrik Bichsel (IBM), Jan Camenisch (IBM), Ulrich Pinsdorf (EMIC), Thomas Roessler (W3C), Stuart Short (SAP)
Specification Developing Organisations: Michele Bezzi (SAP), Carine Bournez (W3C), Eduard de Jong (GD), Robert Mueller (GD), Ulrich Pinsdorf (EMIC), Kai Rannenberg (GUF), Thomas Roessler (W3C), Rigo Wenning (W3C)
Applications: Filipe Beato (K.U.Leuven), Patrik Bichsel (IBM), Carine Bournez (W3C), Joeri De Ruiter (TILT), Stefanie Pötzsch (TUD), Thomas Roessler (W3C), Immanuel Scholz (TUD), Rigo Wenning (W3C)

This deliverable was rendered from HTML pages using Prince XML from YesLogic Pty Ltd. YesLogic has donated a license of Prince XML to W3C.

Table of Contents

1 Introduction
  1.1 Historic Background
  1.2 PrimeLife Project Overview
  1.3 PrimeLife's Approach to Privacy
2 PrimeLife Results
  2.1 Results available from PrimeLife
    2.1.1 Activity 1 - Privacy Life
    2.1.2 Activity 2 - Mechanisms
    2.1.3 Activity 4 - User Interfaces
    2.1.4 Activity 5 - Policies
    2.1.5 Activity 6 - Infrastructures
3 Target Technology Platforms
  3.1 The Web
    3.1.1 Architecture
    3.1.2 Standards
    3.1.3 Evolving Web Application Development Paradigms - Web 2.0
    3.1.4 Semantic Web
    3.1.5 Privacy technologies for the Web
  3.2 Service Oriented Architectures
    3.2.1 OASIS WS-Security
    3.2.2 OASIS WS-SecureConversation
    3.2.3 OASIS WS-Trust
    3.2.4 W3C WS-Policy
    3.2.5 OASIS WS-SecurityPolicy
    3.2.6 PrimeLife Perspective
4 Specification Developing Organisations
  4.1 W3C
    4.1.1 Overview
    4.1.2 W3C and PrimeLife Workshop on Access Control Application Scenarios
  4.2 ISO TMB Privacy Steering Committee
  4.3 ISO/IEC JTC 1 "Information technology"
    4.3.1 ISO/IEC JTC 1/SC 27/WG 5 "Identity Management and Privacy Technologies"
    4.3.2 ISO/IEC JTC 1/SC 17/WG 4
    4.3.3 ISO/IEC JTC 1/SC 17/WG 11
    4.3.4 ISO/IEC JTC 1/SC 37
  4.4 OASIS
  4.5 Kantara Initiative and Liberty Alliance
    4.5.1 Kantara Initiative
    4.5.2 Liberty Alliance
  4.6 IETF
  4.7 TCG
5 Architectures and Frameworks
  5.1 Identity Management
    5.1.1 Identity Management Framework (24760)
    5.1.2 A Framework for Access Management (29146)
    5.1.3 Entity authentication assurance framework (29115)
  5.2 Privacy
    5.2.1 Privacy Reference Framework (29100)
    5.2.2 Privacy Reference Architecture (29101)
6 Policy and rule languages
  6.1 eXtensible Access Control Markup Language (XACML) v3.0
    6.1.1 Basic XACML concepts
    6.1.2 XACML 3.0: Privacy profile
    6.1.3 Current status of the XACML proposal
    6.1.4 Relations to other proposals and to the PrimeLife project
  6.2 The Rule Interchange Format (RIF)
    6.2.1 RIF Dialects
    6.2.2 Use Cases
    6.2.3 Relations to PrimeLife
  6.3 P3P
    6.3.1 Status
    6.3.2 Conclusion
  6.4 APPEL
    6.4.1 Shortcomings
    6.4.2 Conclusion
  6.5 Enterprise Privacy Authorisation Language (EPAL)
    6.5.1 Structure of an EPAL policy
    6.5.2 An EPAL policy example
    6.5.3 An EPAL query example
    6.5.4 A typical EPAL scenario
    6.5.5 Status of the EPAL proposal
    6.5.6 Relations to other standards and to the PrimeLife project
  6.6 CARML
    6.6.1 Elements of the CARML language
    6.6.2 Status of the CARML proposal
    6.6.3 AAPML
    6.6.4 Relations to PrimeLife
  6.7 Identity Governance Framework
  6.8 PRIME Policy Languages
    6.8.1 Rough use cases
    6.8.2 Distinctive features of PRIME languages
    6.8.3 Relation to standards
    6.8.4 Relation to PrimeLife
  6.9 WS-Policy
    6.9.1 Structure of a policy
    6.9.2 Normal form of a policy
    6.9.3 Compact form of a policy
    6.9.4 Nested policies
    6.9.5 References to other policies
    6.9.6 Intersection of policies
    6.9.7 Relations to other proposals and to the PrimeLife project
    6.9.8 Status of the WS-Policy proposal
  6.10 Rein
    6.10.1 Relations to other standards and to the PrimeLife project
  6.11 OASIS WS-XACML
  6.12 Security Policy Assertion Language (SecPAL)
  6.13 PrimeLife Policy Engines
    6.13.1 First PrimeLife Policy Engine
    6.13.2 Second PrimeLife Policy Engine
7 Authentication Infrastructure
  7.1 The ITU-T X.509 Standard
    7.1.1 X.509 Certificate and Certification Process
    7.1.2 Evolution of the X.509 standard
    7.1.3 PrimeLife Impact
  7.2 PKIX
    7.2.1 X.509 Attribute Certificate and Privilege Management Infrastructure
  7.3 XML Signature
    7.3.1 Specification Overview
    7.3.2 Status
    7.3.3 PrimeLife Impact
  7.4 OAuth
    7.4.1 Protocol flow
    7.4.2 Trust and privacy properties
    7.4.3 Specification development
    7.4.4 Open Source Implementations
8 User Control Infrastructure
  8.1 Smart Cards: User-controlled Token for Privacy Protection
    8.1.1 Standardisation
    8.1.2 Architectures
    8.1.3 Anonymous Credentials on Java Cards
    8.1.4 Strategy and Actions
  8.2 Biometrics Standardisation and Privacy
    8.2.1 Biometrics Standardisation
    8.2.2 Architectures
    8.2.3 Strategy and Actions
9 Identity Systems
  9.1 OpenID
    9.1.1 Background
    9.1.2 Protocol Flow
    9.1.3 Message Formats
    9.1.4 Trust and Privacy Properties
    9.1.5 Specification Development
    9.1.6 Open Source Implementations
    9.1.7 PrimeLife Perspective on OpenID
  9.2 Higgins
  9.3 Windows CardSpace
  9.4 Information Card Foundation
  9.5 Open-Source Identity System (OSIS)
    9.5.1 Specification development
    9.5.2 Open Source Interoperability Workshops
  9.6 WS-Federation
  9.7 SAML
    9.7.1 Background
    9.7.2 Architecture
    9.7.3 Protocol Flow
    9.7.4 Open Source Implementations
  9.8 Liberty Identity Federation
    9.8.1 History and relationship with SAML
    9.8.2 Liberty profiles
    9.8.3 Profiles of the Single Sign-On and Federation Protocol
    9.8.4 Single Sign-On Protocol Flow Example: Liberty Artifact Profile
    9.8.5 Liberty and CardSpace
    9.8.6 PrimeLife and Liberty
    9.8.7 Open Source Implementations
  9.9 Pamela Project
  9.10 Yadis
    9.10.1 Protocol flow
    9.10.2 The Yadis document
    9.10.3 Trust and privacy properties
    9.10.4 Specification development
    9.10.5 Open Source Implementations
    9.10.6 Opportunities for PrimeLife
10 Applications
  10.1 phpBB
  10.2 MediaWiki
  10.3 Elgg
  10.4 Firefox Plugins
    10.4.1 Identity Management and Formfiller Enhancement
    10.4.2 Privacy Enhancement
    10.4.3 Trust Enhancement
    10.4.4 Other Firefox Plug-ins
    10.4.5 Opportunities for PrimeLife
  10.5 MozPETs: Mozilla Privacy Enhancement Technologies
  10.6 Noserub

Chapter 1 Introduction

This is PrimeLife's second report on standardisation and interoperability. The first report was published just a few months into the project's life; a third report will be due at its end. This report is therefore an interim snapshot of the project's view on the technological and standards landscape at this point in time. As was the case for the first report, we have merged two deliverables into one: Beyond standards, this document looks at relevant open source projects and PrimeLife's possible contributions to these projects.

The material presented here is largely an update of what was contained in the first report, brought up to date where technical development has moved on, and augmented where PrimeLife has taken up activities in the standards space.

During its first two years, the project was able to make a number of fruitful connections into the standards world:

• Together with W3C, the project organised a workshop on access control application scenarios in late 2009. One of the workshop's co-chairs is also a co-chair of the OASIS XACML Technical Committee. Contributions to the workshop demonstrated the project's own work in the access control space (with results from Activities 2, 5 and 6) to a broader audience. The MASTER, SWIFT and TAS3 EU projects and the UK-funded ENCORE project joined us at the workshop, as did players from other parts of the research and industry community. The workshop report forms part of this deliverable and will be the basis for the project's further standards strategy in this space. While this workshop focused on basic access control scenarios, we anticipate that a second W3C and PrimeLife workshop will focus on data handling policies and obligation management.
• The W3C Policy Languages Interest Group (PLING) continues to be a forum for coordinating issues in the policy language space. It is a communication platform for project partners and other players in the industry. Like all W3C groups, it is vendor-neutral.
• In 2008, the project established a formal liaison with ISO/IEC JTC 1/SC 27/WG 5 "Identity Management and Privacy Technologies". This liaison leads to mutual impact. Four project partners (KS, GD, GUF, ULD) participate in WG 5.

• An expert from PrimeLife partner GD participates in ISO/IEC JTC 1/SC 17/WG 4 on smart card standards.

The project is strongly committed to contributing to open source, as well as to using open source software to build privacy-enhanced systems. Currently, the following contributions are planned:

• An improved version of the PRIME Core, which closes the gap between a cryptographic library offering privacy features and applications that want to use those features.
• Identity Mixer, an anonymous credential system implementation, which has already been made available under an open source license at https://prime.inf.tu-dresden.de/idemix/.
• Prototypes extending MediaWiki with privacy-enhanced access control and a privacy-friendly incentive system.
• Clique, an extension of the open source social network Elgg. It provides fine-grained data release policies and allows a user to establish different faces within the social network. These faces correspond to the different appearances people show towards different communities (e.g., groups of friends, business partners) in real life.
• Scramble!, a Firefox plug-in that allows for the encryption of data that is to be stored at untrusted providers. When encrypting the data, the user decides who is allowed to access it. The release of the plug-in to the open source community is expected shortly.
• Extensions to phpBB that address (1) the privacy awareness of users and (2) the means for a contributor to limit access to her contributions.
• Dudle (http://dudle.inf.tu-dresden.de/), a privacy-enhanced version of the well-known event scheduling service http://www.doodle.com/, which will be released under an open source license.

1.1 Historic Background

Hidden data collection and the slow erosion of people's privacy led to the start of the P3P work by W3C in 1997. Using P3P, a service can make statements about its privacy practices in both human and machine readable form to support the individual using the service in making informed decisions. Informed decisions by individuals and procurers were also the aim of ISO/IEC's work on IS 15408 "Evaluation Criteria for IT Security", which started in 1991 and was enhanced by a "Privacy" class covering Anonymity, Pseudonymity, Unlinkability, and Unobservability before its first publication in 1999.

With the advent of "Web 2.0", the Web has become more interactive. Interactive services are built around online communities or offer personalised services. Exchanges of values, ideas and information require a certain level of trust or reputation. In some cases, access to the information is regulated. Currently, a number of initiatives in open source and standardisation exist: OpenID, CardSpace, Higgins, Liberty Alliance (now Kantara), OSIS, as well as language-driven ones like FOAF, XACML or SAML. Mostly, the goal is to provide the notion of "Identity" across several completely decentralised services and to add the necessary hooks for services to manage the relationship with their users. Often, the frameworks and languages are limited to a specific technology or service. Privacy and security are ignored or only minimally addressed.

As a consequence, identity management today consists of isolated initiatives. Industrial solutions provide control schemes without privacy support. Current standardisation efforts try to federate several solutions. Such federation, in turn, makes the application of Privacy-Enhancing Technologies (PET) even harder, as the semantics must be part of the interoperability in order to survive the federation. The PRIME project [PRIME] demonstrated the use of "sticky policies" to transport data protection information together with the personal information itself. PRIME also demonstrated how the user may influence or even control the data processing after the transmission of personal data (user-centric approach). The MIT-based projects TAMI and PAW showed that data found on the Web can be used to create hooks for data protection decisions, access control and other constraints on the use of data. This work allows the constraints on data use to be opened up from a strictly user-input-driven scheme to larger community considerations, and enhances the machine-processing capabilities at users' hands. Transparency of data processing is the preferred approach.

1.2 PrimeLife Project Overview

Individuals in the Information Society want to protect their autonomy and retain control over personal information, irrespective of their activities. Information technologies hardly consider those requirements, thereby putting the privacy of the citizen at risk. Today, the increasingly collaborative character of the Internet enables anyone to compose services and to contribute and distribute information. Individuals will participate in this collaboration throughout their life, leaving a life-long trail of personal data.

This raises substantial new privacy challenges: A first challenge is how to protect privacy and at the same time establish trust in emerging Internet applications such as user-contributed content, collaborative scenarios, and virtual communities. A second challenge is how to maintain life-long privacy.

PrimeLife will resolve the core privacy and trust issues pertaining to these challenges. Its long-term vision is to counter the trend to life-long personal data trails without compromising on functionality. We will build upon and expand the sound foundation of the FP6 project PRIME, which has shown how privacy technologies can enable citizens to exercise their legal rights to control personal information in on-line transactions.

Resolving these issues requires substantial progress in many underlying technologies. PrimeLife will substantially advance the state of the art in the areas of human computer interfaces, configurable policy languages, Web service federations, infrastructures and privacy-enhancing cryptography.

Privacy in a digital scenario, however, not only requires protection on the application layer. Rather, the lower layers (in particular, the data link and transport layers) must guarantee privacy as well. Given that solutions such as TOR [TOR], Privoxy [Privoxy] and I2P [I2P] already exist, PrimeLife will keep its focus entirely on the higher layers and not participate in any of those projects.

1.3 PrimeLife's Approach to Privacy

We envision that users will be able to act and interact electronically in an easy and intuitive fashion while retaining control of their personal data throughout their lives. Users might use a multitude of different means to communicate with several partners employing a variety of platforms. For instance, a user Alice might authenticate to an on-line service created by a

mash-up. She automatically logs on using her laptop and later confirms a payment transaction for an electronic article using her mobile phone. In all those transactions, even though many potentially untrusted services collaborate in the mash-up, Alice is able to reveal only the minimally necessary information to establish mutual trust and conduct the transaction. For instance, no service will learn any personal information about Alice. Nevertheless, a merchant is guaranteed payment for the services.

In other words, privacy needs to be addressed in a user centric way, i.e., in a way where the users are given control over their data. This requires that

1. the users be informed what data about them is requested (it might be that the data needs to be certified by a third party) and how that data is going to be used; and that
2. the users be provided with technologies that allow them to conduct transactions in such a way that only the necessary information needs to be revealed.

The first item requires that access control be done in an attribute-based way. Unlike an access control list (ACL), which enumerates the users allowed access, attribute-based access control defines for each item which attributes (requirements, credentials) a requester needs to satisfy to get access to the resource. Furthermore, access control needs to be done such that the user is not authenticated with respect to a user ID (followed by a check whether that user has the required attributes), but rather by allowing access to any user who can prove that the attributes are satisfied. For this to work, various languages for policies, ontologies, and credential and attribute formats are needed to communicate all these data. Moreover, users need to be given intuitive user interfaces that guide them through such authorisation procedures, and they should be able to investigate which data will be sent to which partners at what time, thereby maintaining a profile (or partial identity) with each communication partner.
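The contrast between the two access control styles described above can be sketched in a few lines. This is an illustrative toy, not a PrimeLife interface; all names (`check_acl`, `check_abac`, the policy tables) are made up for the example.

```python
# ACL: the resource lists *who* may access it, so requesters must be identified.
acl = {"medical_record_42": {"alice", "bob"}}

def check_acl(user_id: str, resource: str) -> bool:
    return user_id in acl.get(resource, set())

# ABAC: the resource lists *what must be proven*; any requester whose certified
# attributes satisfy the predicate gets access -- no user ID is needed.
abac_policy = {
    "bar_entrance": lambda attrs: attrs.get("age", 0) >= 18,
    "medical_record_42": lambda attrs: attrs.get("role") == "physician",
}

def check_abac(attrs: dict, resource: str) -> bool:
    rule = abac_policy.get(resource)
    return rule is not None and rule(attrs)

print(check_acl("alice", "medical_record_42"))               # True
print(check_abac({"age": 21}, "bar_entrance"))               # True
print(check_abac({"role": "student"}, "medical_record_42"))  # False
```

In a real system, the attribute dictionary would not be self-reported but proven via certified credentials, as the next paragraphs discuss.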

Let us discuss the second item. Some of the technologies we have already mentioned: access control mechanisms, various languages and components to work with them (e.g., editors, evaluation engines, ...), and user interfaces. In addition, we require anonymous (or private) credentials. These are essentially public key certificates that certify attributes of users (e.g., an electronic version of a driver's license). Specifically, they allow users to control what information contained in the certificate shall be revealed at what time. For instance, using an anonymous driver's license credential, user Alice could convince a bartender that she is old enough to order a beer without disclosing any other information attributed to her in the license (in fact, she does not even have to reveal her birth date, only the fact that she was born sufficiently long ago to be old enough for that beer). This aspect of PrimeLife's approach is taken over from the PRIME project and can hence draw on PRIME's results, many of which are already quite mature. We therefore deem that it offers many opportunities for standardisation and open source contributions.
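The selective-disclosure idea in the driver's license example can be sketched at the interface level. Note the heavy caveat: a real anonymous credential system (such as Identity Mixer) backs the predicate with a zero-knowledge proof so the verifier need not trust the holder; this toy class only illustrates *what the verifier learns* (one bit) versus what stays hidden (the birth date). All names here are hypothetical.

```python
from datetime import date

class ToyCredential:
    """Holds certified attributes; exposes only predicate outcomes."""

    def __init__(self, attributes: dict):
        self._attributes = attributes  # never leaves the holder

    def prove_age_at_least(self, years: int, today: date) -> bool:
        born = self._attributes["birth_date"]
        age = today.year - born.year - (
            (today.month, today.day) < (born.month, born.day)
        )
        # The verifier learns one bit -- not the birth date, name, or address.
        return age >= years

license_cred = ToyCredential({"name": "Alice", "birth_date": date(1990, 5, 1)})
print(license_cred.prove_age_at_least(18, date(2010, 2, 28)))  # True
```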

However, the approach of revealing only the minimally necessary information does not address all the privacy problems that arise. In many cases, e.g., social networks or wiki pages, users indeed want to reveal personal information about themselves and also to exchange this information. Moreover, as information is provided by many different parties, it becomes much harder to judge whether the information is trustworthy. Consequently, to establish trustworthiness, users will potentially have to reveal even more information about themselves to allow their communication partners to assess to what extent and with respect to what they can be trusted. PrimeLife aims to address these challenges as well. In this area, PrimeLife is conducting basic research, so the results are not yet general enough to merit standardisation. However, PrimeLife is about to make a few contributions to open source communities that enable privacy for new Internet applications such as social networks and wiki-based sites.

Chapter 2 PrimeLife Results

This chapter briefly reviews the different activities of PrimeLife and their results, and discusses their relationship to relevant standards and open source initiatives.

2.1 Results available from PrimeLife

2.1.1 Activity 1 - Privacy Life

This activity's goal is to provide privacy-enhancing identity management for people's whole lives. To this end, the activity will build research prototypes for the following real-life scenarios:

• Trusted Content: Supporting at least some user-centric control over a user's personal data in situations where she wants to share personal information with several other people.
• Selective Access Control in Social Networks: Adapting the user-centric privacy-enhancing identity management of bi- or at most trilateral settings to new technological as well as business settings.
• Managing identity, trust and privacy throughout life: Sustainable privacy and identity management from birth to death.

The focus will be on formulating appropriate use cases and scenarios, condensing them into requirements, building an appropriate prototype that concentrates on the relevant aspects (otherwise both expense and complexity explode), and learning the relevant lessons from building, using, and evaluating the prototype, without getting stuck in irrelevant details and shortcomings of the prototype.

While we will make these prototypes open source whenever possible and useful, we do not expect that the outcomes of this activity will reach a suitable maturity level for standardisation.

Prototypes

So far, two mechanisms have been studied. The first year's prototype studied to what extent one can collect comments made by users on Web articles and display them to the user reading the original article. The conclusion was that such aggregation is indeed helpful, but that users (and in particular experts) very often lack the incentive to comment on articles they have read. Also, such comments are free-form text, so readers of an article need to read the comments as well to be able to judge its trustworthiness.

In the second-year prototype, the work package has therefore studied and implemented two mechanisms that address these issues. The first is an anonymous incentive system in which users can offer e-coins to other users for reviewing a given page. After a review has been delivered, these coins can be collected by the reviewer and either offered for a review of another page or exchanged for other goods. The second mechanism is a reputation and rating system that allows users to rate a page. Such ratings enable users to assess the quality of a page more quickly. Moreover, the system has a feedback mechanism through which raters can collect reputation for their reviews, such that their reputation can later be used to weight their ratings. At this stage, the work package is investigating which kinds of ratings and reputation are most usable in terms of transparency and ease of understanding and use.
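The idea of weighting ratings by the rater's reputation, as described above, can be sketched with a simple weighted average. The prototype's actual aggregation formula is not specified in this document, so this is purely an illustrative sketch with made-up names.

```python
def weighted_rating(ratings):
    """ratings: list of (score, rater_reputation) pairs, reputation >= 0.

    Returns the reputation-weighted mean score, or None if no rater
    carries any reputation yet.
    """
    total_weight = sum(rep for _, rep in ratings)
    if total_weight == 0:
        return None
    return sum(score * rep for score, rep in ratings) / total_weight

# A high-reputation rater pulls the aggregate towards their score:
# (5*10 + 1*1) / 11 = 4.64, far closer to the reputable rater's 5.
ratings = [(5, 10.0), (1, 1.0)]
print(round(weighted_rating(ratings), 2))  # 4.64
```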

PRIME Core

A number of the prototypes of Activity 1 are based on the integrated prototype produced by the PRIME project. The idea of the PRIME core is to close the gap between a library providing anonymous credential primitives and an application that needs easy-to-use high-level functions and a credential infrastructure. The integrated prototype of the PRIME project was primarily aimed at supporting server-client scenarios, i.e., the e-Shopping scenario. As the time for user testing was limited, the interfaces were hard to use, and in its first version the framework's implementation was not intended for use by a broad open source community. Within the first project year of PrimeLife, effort was made to overcome these issues: a lot of code restructuring has been done and the interfaces have been simplified. Two prototypes demonstrate the usability of the PRIME core with respect to ease of implementation and usability in new scenarios. The first is the Collaborative Workspace prototype, which includes an access control extension for MediaWiki as well as an access control extension for phpBB. The second prototype is a reputation system for MediaWiki.
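The "gap-closing" role of the PRIME core described above can be pictured as a facade: the application makes one high-level call, and the facade handles credential storage, lookup, and proof construction against the low-level library. None of the class or method names below are actual PRIME core APIs; this is a hypothetical sketch of the architectural pattern only.

```python
class LowLevelCredentialLib:
    """Stand-in for a cryptographic library exposing raw primitives."""

    def build_proof(self, credential, revealed_attrs):
        # A real library would produce a cryptographic proof here.
        return {"proof_over": sorted(revealed_attrs)}

class CoreFacade:
    """The kind of high-level call an application actually wants to make."""

    def __init__(self, lib):
        self._lib = lib
        self._store = {}  # credential storage, keyed by a friendly name

    def store_credential(self, name, credential):
        self._store[name] = credential

    def authenticate(self, name, reveal):
        # One call hides storage lookup and proof construction from the app.
        return self._lib.build_proof(self._store[name], reveal)

core = CoreFacade(LowLevelCredentialLib())
core.store_credential("drivers_license", {"age_over_18": True})
print(core.authenticate("drivers_license", ["age_over_18"]))
```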

Privacy Dashboard Extension

An open source browser extension is being developed as a privacy dashboard that will enable users to manage their identities, credentials, and privacy preferences, and enable them to track disclosures of personal data to websites. This is being realized as an add-on for the Firefox browser and will be made available for public distribution.

Scramble! Firefox Extension

We produced the Firefox extension Scramble!, which provides a mechanism for users to enforce access control over their own data. Its main target is to protect users from sharing sensitive information with Social Network Site (SNS) providers. It enables users to create

groups of users and handles decryption transparently. In addition, Scramble! enforces access rights by means of encryption.

Scramble! uses OpenPGP to encrypt data provided by a user. The user also provides a group (i.e., a list of at least one user) that is authorised to read the given data. Consequently, Scramble! uses an existing PKI and key management model, which even allows for "broadcast" encryption to multiple recipients. GnuPG also allows anonymous-recipient encryption by omitting the public key IDs from the encrypted blob, but this is still vulnerable to active attacks. Instead of storing large amounts of encrypted data in an SNS, it is also possible to list only "tiny url" snippets that refer to the encrypted data on a third-party server. In this way, the problem of the large ciphertext size (it currently grows linearly with the number of users that are granted access) is minimised. In addition, visual contamination of the SNS platform can be avoided.
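The linear ciphertext growth mentioned above comes from the hybrid structure of OpenPGP-style multi-recipient encryption: the payload is encrypted once under a session key, but that session key is wrapped separately for every recipient. The toy model below illustrates only this structure; it is NOT real cryptography and does not follow the actual OpenPGP packet format, and all function names are made up.

```python
import hashlib
import secrets

def toy_wrap(session_key: bytes, recipient_key: bytes) -> bytes:
    # Stand-in for a public-key wrap of the session key: one fixed-size
    # packet per recipient (this is where the linear growth comes from).
    return hashlib.sha256(recipient_key + session_key).digest()

def encrypt_for_group(payload: bytes, recipient_keys: list) -> dict:
    session_key = secrets.token_bytes(32)
    # Toy symmetric layer: XOR against a hash-derived keystream.
    keystream = hashlib.sha256(session_key).digest() * len(payload)
    body = bytes(b ^ k for b, k in zip(payload, keystream))
    return {
        "body": body,  # one copy, independent of the group size
        "wrapped_keys": [toy_wrap(session_key, rk) for rk in recipient_keys],
    }

small = encrypt_for_group(b"post", [b"alice"])
large = encrypt_for_group(b"post", [b"alice", b"bob", b"carol"])
# The body stays the same size; only the per-recipient key packets multiply.
print(len(small["wrapped_keys"]), len(large["wrapped_keys"]))  # 1 3
```

This also makes clear why the "tiny url" indirection helps: only the small reference needs to live in the SNS, while the (group-size-dependent) ciphertext sits on a third-party server.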

Clique

Clique is a modification of the Elgg social network platform. It provides users with a privacy-oriented social network platform, including, for example, fine-grained access control and multiple 'faces'. The fine-grained access control refers to the possibility for a user to define, for each post, which users (or groups of users) can see that post. The faces enable a user to present herself differently towards different audiences, which directly reflects behaviour in real life.

phpBB extension

The open source platform phpBB is extended with privacy-enhancing access control based on policies and credentials. This way, phpBB users can restrict access to an element of the content structure they own (which could be the forum itself, topics, threads, or individual posts) to users who can prove possession of specified properties certified by one or more specified parties. The users are thus not obliged to rely on user accounts (provided by phpBB) when restricting access. This approach opens up the potential to foster social contacts without requiring the application platform to manage any identity data, thereby decoupling identity management functionality from the actual application functions.

The perception of privacy in social settings depends on the anonymity or identifiability of the users on the one hand, and on the available audience, i.e., who may take note of the disclosed personal data, on the other hand. We developed a privacy-awareness module that helps users to assess their level of privacy while interacting with others on the Internet and enables them to make informed decisions about whether to disclose personal data in a phpBB forum.

2.1.2 Activity 2 - Mechanisms

The objective of Activity 2 is to perform basic research addressing the complex tool-related problems of guaranteeing privacy and trust in the electronic society. The results of the activity will be research findings advancing the state of the art of current technologies and solutions. Proof-of-concept prototypes implementing novel techniques will also be developed, producing tools that can be used by other activities or even be made available publicly.

The development of privacy-enhancing mechanisms and techniques will in particular focus on the following key problems:

• Development of cryptographic solutions for protecting users' credentials and identity management transactions. This includes various extensions of anonymous credentials that will potentially be implemented as part of Identity Mixer, as well as mechanisms to store credentials securely.
• Development of metrics for expressing privacy compliance, trust, and utility of released data.
• Identification of privacy threats due to re-identification and correlation in data collection, and of means to assess protection/exposure against them.
• Definition of novel approaches for empowering users, enabling them to acquire control over accesses to and uses of their own data.

The results of this activity will influence the work of Activity 5 in terms of requirements and/or new policy mechanisms. We expect that a number of mechanisms produced by this activity can be made available as open source and that some of them will be relevant to standardisation bodies (either directly or in conjunction with the results' use in other activities).

Identity Mixer

The IBM Identity Mixer [Idemix] system developed by IBM's Zurich Research Laboratory is an implementation of the Camenisch-Lysyanskaya anonymous credential system. A credential system consists of users and organisations. Organisations know the users only by pseudonyms. Different pseudonyms of the same user cannot be linked. Yet, an organisation can issue a credential to a pseudonym and the corresponding user can prove possession of this credential to another organisation (who knows this user by a different pseudonym) without revealing anything more than the fact that such a credential is in this user's possession. Credentials can be set for unlimited use (these are called multiple-show credentials) and for one-time use (these are called one-show credentials). Possession of a multiple-show credential can be demonstrated an arbitrary number of times; these demonstrations cannot be linked to each other.
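The unlinkability property described above can be illustrated with a toy Pedersen-style commitment: the same master secret yields a fresh pseudonym for every organisation, and the pseudonyms cannot be linked to each other. This is a simplified sketch of the idea only, not the Camenisch-Lysyanskaya protocol; the group parameters below are invented for illustration and offer no real security.

```python
import secrets

# Toy group parameters (assumption: chosen only for illustration; a real
# deployment uses vetted large groups, these numbers offer no security).
p = 2**127 - 1   # a Mersenne prime used as toy modulus
g, h = 3, 7      # toy generators

def new_pseudonym(master_secret: int) -> int:
    """Commit to the same master secret with fresh randomness; without
    the opening, two such commitments cannot be linked to each other."""
    r = secrets.randbelow(p - 1)
    return (pow(g, master_secret, p) * pow(h, r, p)) % p

s = secrets.randbelow(p - 1)   # the user's single master secret
nym_a = new_pseudonym(s)       # pseudonym registered at organisation A
nym_b = new_pseudonym(s)       # pseudonym registered at organisation B
print(nym_a != nym_b)          # the two pseudonyms look unrelated
```

A credential issued under nym_a can then be shown under nym_b by proving, in zero knowledge, that both commit to the same secret; that proof machinery is what Identity Mixer implements.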

Apart from the pure cryptographic operations of the Camenisch-Lysyanskaya credential system, Identity Mixer also implements the high-level protocols and policies to deal with credentials (issuance and presentation) as well as user interfaces to select credentials upon a request for information. The first version of the Identity Mixer software was developed by IBM Research about 6 years ago and has been subject to continuous improvement ever since, in particular in the context of the PRIME project. All parts of Identity Mixer except for the cryptographic operations have been made available as open source within the framework of the Higgins project. The cryptographic parts are available at https://prime.inf.tu-dresden.de/idemix/.

Anonymous credentials are a core element to enable privacy enhancing identity management. The Identity Mixer system will be maintained and extended through the PrimeLife project. While contributing Identity Mixer to the open source community is an ongoing effort, not many aspects of it (nor anonymous credentials in general) have been standardised. Relevant aspects include token formats, wire formats, cryptographic algorithms, claim languages and the API for a credential system.

Privacy Preserving Secure Log System

The privacy preserving log system is an extension of a secure log system to handle privacy aspects, primarily unlinkability of log records and anonymous access to log entries. The purpose of the log is to make it possible for data subjects to track how their data has been handled by the services side (i.e., to enhance the transparency of data access and processing) without causing data leakage or the addition of new personal data in the process. The log is currently integrated in the PRIME core but can easily be ported to any system, given that certain security requirements are met. Some work still remains regarding the presentation of the data in the log, performance evaluation, and deciding exactly where in the PRIME core to generate log events for optimal coverage of data access and processing. A first attempt at a user-friendly log presentation view will be made together with Task 4.4. The aim is that the view will be part of the DataTrack.

Multilateral secure interoperable reputation systems

We are building two types of systems that emphasise different aspects of multilateral security for reputation systems.

As a first scenario/infrastructure for a multilateral secure reputation system for rating interactors, we chose a centralised community system where interactions between members take place via a central server, on which they are stored and globally available. We chose the web forum software phpBB for our implementation. The reputation system we designed and implemented uses global reputations that are stored on the user's device, giving him control over his personal data, including his reputation. The reputation is usable in different phpBB fora, which makes the system interoperable between different communities.

As a second scenario/infrastructure, we chose a multilateral secure reputation system for rating arbitrary web content in the Web 2.0. We chose to store every rating given to a piece of content as metadata together with the web content itself. The reputation of a content item is therefore given by the set of ratings available as metadata with the content. By using RDF, and through its applicability to arbitrary web content, the system is interoperable and independent of concrete applications.
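As a sketch of the storage model, one rating can be serialised as a single RDF statement in N-Triples syntax and attached to the content as metadata. The predicate URI below is hypothetical, chosen only for illustration:

```python
def rating_triple(content_uri: str, score: int) -> str:
    """Serialise one rating as an RDF statement in N-Triples syntax.
    The vocabulary URI is a made-up placeholder, not a real vocabulary."""
    return f'<{content_uri}> <http://example.org/vocab#rating> "{score}" .'

# The reputation of a content item is simply the set of such statements.
triples = [rating_triple("http://example.org/photo.jpg", s) for s in (4, 5)]
print(triples[0])
```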

Privacy-Awareness Panel

The privacy-awareness panel is designed as a so-called mod (short for modification, a usual concept for extensions of the original) for the open source forum software phpBB. It displays privacy-relevant information to the user, e.g., which audience may access her contribution to the forum and what additional information the provider learns. The perception of privacy in social settings depends on the anonymity or identifiability of the users on the one hand, and on the available audience, i.e., who may take note of the disclosed personal data, on the other hand. The privacy-awareness panel helps users to assess their level of privacy while interacting with others on the Internet and enables them to make informed decisions about whether to disclose personal data in a phpBB forum.

Privacy-Enhanced Event Scheduling

For years, event scheduling has been done with applications like Microsoft Exchange/IBM Lotus Notes or a central iCal server within the intranet. As Web 2.0 applications became very popular, an application named Doodle became popular as well, which offers event scheduling in a very easy way. In this solution, an initiator configures an online poll, where everybody's vote is shown to all visitors of the web page.

In contrast to the intranet solution, Doodle offers its event scheduling service to everyone. However, both solutions have in common that everybody has to publish his availability pattern. In contrast to a solution running within a company, in the Doodle solution the availability patterns are visible to the whole world.

We built an event scheduling solution in which people can agree on a common time slot without revealing their availability pattern to anybody else. Through cryptographic mechanisms, the system works without the need for a trusted third party. In addition to privacy-friendliness, everybody is able to verify that no other user cheated, without having to give up his anonymity.

The system is developed and licensed under an open source license. It is available at http://dudle.inf.tu-dresden.de.
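The core idea, learning only the aggregate availability while keeping each individual pattern hidden, can be sketched with additive secret sharing. This is a simplification under assumed parameters; the deployed protocol additionally provides verifiability and differs in its concrete mechanisms:

```python
import secrets

Q = 2**61 - 1  # toy modulus (assumption; the real protocol's parameters differ)

def share(vote: int, n: int) -> list[int]:
    """Split a 0/1 availability vote into n additive shares modulo Q."""
    shares = [secrets.randbelow(Q) for _ in range(n - 1)]
    shares.append((vote - sum(shares)) % Q)
    return shares

# Three participants, availability for one time slot: available, busy, available.
votes = [1, 0, 1]
n = len(votes)
all_shares = [share(v, n) for v in votes]
# Participant j only ever sees column j of the share matrix and publishes
# its sum; no single party learns any individual availability pattern.
column_sums = [sum(col) % Q for col in zip(*all_shares)]
total = sum(column_sums) % Q
print(total)  # 2: the number of available participants, and nothing more
```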

Selective access control to outsourced data

As part of the activity on selective access control in social networks, the client-server web application developed by UNIMI and UNIBG enforces and extends the data outsourcing architecture proposed in [DFJPS07], supporting users in the specification of access restrictions to resources they wish to share with a desired group of other users via an external storage provider. In this scenario, different data owners supply several groups of authorized users with different portions of their data through an external service provider. The enforcement mechanism couples authorizations and encryption to guarantee that only users in the specified group will be able to access the resources, which remain confidential to all other parties, including the service provider directly supporting the exchange of the data.

In the enforced protocol, a single owner, before outsourcing her resources, encrypts them with a strong symmetric block cipher using different keys. Each user can compute a shared key with the owner of the resource she wants to access, via a non-interactive protocol supported by the service provider; thereafter, she can derive all the keys of the resources she is entitled to access, leveraging a set of non-confidential information also managed by the service provider. The system exploits two cryptographic techniques: a key agreement method allows two users to share a secret key for subsequent cryptographic use; a key derivation method adopts the secret keys shared between pairs of users to allow them to derive all keys used for encrypting resources that they are authorized to access. The combination of these two techniques defines an encryption policy that correctly enforces the collection of authorization policies chosen by each data owner and minimizes the threat of a successful man-in-the-middle attack by the service provider.
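The derivation step can be sketched as follows: given the pairwise shared key and a public token published by the provider, both parties deterministically derive the same resource key. This is a minimal hash-based sketch, assuming SHA-256 as the derivation function; the actual scheme specifies its own key agreement and derivation methods, and the byte strings below are placeholders.

```python
import hashlib

def derive_resource_key(shared_key: bytes, public_token: bytes) -> bytes:
    """Derive one resource key from the pairwise shared key and a
    non-confidential token published by the service provider."""
    return hashlib.sha256(shared_key + public_token).digest()

shared = b"pairwise-secret-between-owner-and-user"   # from key agreement
token = b"resource-42"                               # public derivation info
k_owner = derive_resource_key(shared, token)         # owner's derivation
k_user = derive_resource_key(shared, token)          # user's derivation
print(k_owner == k_user)  # both sides obtain the same resource key
```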

The system is completed by a method for managing the evolution of access control policies, imposing two layers of encryption on data and providing grant and revoke primitives to each data owner. The inner layer of encryption is imposed by the data owner to provide initial protection (when releasing the resource for outsourcing); the outer layer is imposed by the server to reflect dynamic policy modifications (without requiring the owner to re-encrypt the resources every time there is a change in the authorization policy). The two-layer encryption allows the owner to outsource, besides the resource storage, the authorization policy management, while not releasing data to the provider. Furthermore, the use of two encryption layers helps minimize collusion threats between the provider and malicious users and allows the trust boundaries around each resource to be analyzed.

The developed client-server application enforces the confidential sharing of outsourced data, demonstrating the efficiency of the proposed technique. The prototype provides a Mozilla Firefox plug-in which includes the client application logic, and an application server written in Java, based on an Apache Tomcat web server and a PostgreSQL database.

Fragmenting data for protecting privacy

Data outsourcing is emerging today as a successful paradigm allowing individuals and organizations to exploit external services for storing and managing huge data collections. A common practice for protecting outsourced data consists in encrypting the data. Recent proposals, however, have put forward the idea that there are situations where encryption may be overkill or may not always be possible. These proposals guarantee the protection of outsourced data by combining encryption with fragmentation. The idea is that sensitive associations among data are protected by splitting them into two or more fragments, while information that is sensitive per se is encrypted.

The tool developed by UNIBG and UNIMI has the goal of supporting the data owner in the fragmentation process. Given a set of confidentiality constraints representing the sensitive data or associations that need to be protected, the tool computes a fragmentation that satisfies the confidentiality constraints. The fragmentation algorithm departs from the use of encryption and exploits the availability of a (limited) trusted storage at the data owner side. The tool works under the assumption that the owner, while outsourcing the major portion of the data at one or more external servers, is willing to locally store a limited amount of data. The owner-side storage, being under the owner's control, is assumed to be maintained in a trusted environment. The tool produces two fragments: one stored at the data owner side and one stored at the external server. This permits breaking sensitive associations by exploiting fragmentation only, without using encryption. Since the trusted storage available at the data owner is limited in size and expensive (also because the data owner would be involved in the evaluation of users' queries), the tool has been designed to compute a fragmentation that minimizes the owner's workload. The tool supports different metrics for the evaluation of the quality of a fragmentation. In particular, it offers the possibility to minimize: i) the number of attributes stored at the owner; ii) the size of the fragment stored at the owner; iii) the number of queries partially evaluated by the owner; iv) the number of conditions evaluated by the owner.

The tool is composed of two applications: the first implements a greedy fragmentation algorithm [CDFJPS09] (developed in C++), while the second realizes its Graphical User Interface (developed in Java).
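The fragmentation step can be sketched as a greedy hitting-set computation for the first metric (minimising the number of owner-stored attributes): every confidentiality constraint must have at least one attribute in the owner-side fragment, so no sensitive association is visible at the external server. This is a simplified stand-in, not the algorithm of [CDFJPS09]; the attribute names and constraints are invented for illustration.

```python
def fragment(attributes, constraints):
    """Greedy hitting set: move attributes to the owner-side fragment until
    every confidentiality constraint has at least one attribute there."""
    owner, uncovered = set(), [set(c) for c in constraints]
    while uncovered:
        # pick the attribute that breaks the most remaining constraints
        best = max(attributes, key=lambda a: sum(a in c for c in uncovered))
        owner.add(best)
        uncovered = [c for c in uncovered if best not in c]
    server = [a for a in attributes if a not in owner]
    return sorted(owner), server

attrs = ["name", "dob", "zip", "illness", "job"]
# Sensitive associations that must never appear together at the server:
cons = [{"name", "illness"}, {"dob", "zip", "illness"}]
owner_frag, server_frag = fragment(attrs, cons)
print(owner_frag)  # ['illness']: one attribute breaks both constraints
```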

2.1.3 Activity 4 - User Interfaces

Privacy-enhancing identity management will only be successful if its technologies are accepted and employed by the end users. For this reason, the research and development of user interfaces for PrimeLife technologies that are user-friendly and compliant with regulations will play a key role. The activity will work on UI representation of privacy-enhancing identity management concepts, for trust and assurance as well as for policy display and administration.

In particular, the protocols between the user-side online functions for exercising user rights and the services sides' transparency support tools that respond to them need to be standardised. PrimeLife partner KAU has already brought this up at the ISO/IEC JTC 1/SC 27/WG 5 meetings. The transparency tools that will be developed can also be provided as open source.

Besides, in PRIME we have derived a set of predefined privacy preferences, which a user can choose from and fill in with concrete data values, or customise "on the fly" and store under a new name. The predefined privacy preferences describe what types of data should be released for specific purposes under specific conditions, and the type of pseudonymity and level of linkability to be used. This set also includes the most privacy-friendly options for acting anonymously or for releasing as little information as needed for a certain service. Such a set of predefined privacy preferences, as well as mechanisms for customising them on the fly (i.e., when they are used in interactions with services sides), could be standardised and provided with our open source products.
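By way of illustration, such a predefined preference could be encoded as a small structured record that a user customises on the fly and stores under a new name. The field names and values below are hypothetical, not a standardised format:

```python
import json

# Hypothetical encoding of one predefined privacy preference, following the
# description above (data to release, purpose, pseudonymity, linkability).
preference = {
    "name": "anonymous-browsing",
    "release": [],                          # most privacy-friendly: no data
    "purpose": "service-access",
    "pseudonymity": "transaction-pseudonym",
    "linkability": "none",
}

# On-the-fly customisation: copy the preset, adjust it, store it under a
# new name.
customised = dict(preference, name="shopping", release=["delivery-address"])
print(json.dumps(customised, indent=2))
```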

Furthermore, the multi-layered structure of privacy policies as suggested by the Art. 29 Working Party, in combination with mechanisms for obtaining informed consent for the disclosure of personal data/proofs with credentials, could be standardised. In Task 4.3.2, we also mention in particular that the multi-layered structure for presenting policies could be extended by another top level consisting of standardised policy icons.

SendPersonalData dialog

The SendPersonalData dialog, also known as the AskSendData dialog, is the component that asks the user to select and confirm data sent by the local PRIME instance on the client side to the PRIME instance on the web server side. It is opened by the client-side PRIME instance when a server requests personal data of the user. The default implementation is part of the PRIME core and thus written in Java.

The purpose is to

• obtain the user's informed consent about privacy circumstances,
• query which PII data to expose to the server, and
• manage personal information on the fly.

The dialog communicates with the local PRIME instance through two web service interfaces, so-called common and restricted. The documentation of these interfaces is contained in the source code documentation of PRIME (interfaces CommonService and RestrictedService).

The current architecture of PRIME allows plugging in any implementation of a SendPersonalData dialog and is not restricted to the current default implementation. However, only the default implementation supports all available features of the PRIME toolbox (e.g., PrivPrefs, Credentials, SPCC, Assurance Control Blacklists).

An alternative implementation of the SendPersonalData dialog has to extend the AskSendData abstract class. In particular, the select method has to be implemented. This method gets the description of the data requested by the server (Claim request) and has to return the data the user wants to disclose (Claim).

Additional functions are provided in the restricted web service to, e.g., build a new Claim out of PII information from the local PRIME instance's database. In the current implementation, to use these extended features of creating an own Claim from PII information, the dialog has to be written in Java and has to be called within the process space of the local PRIME instance.
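The extension contract described above can be sketched as a Python transliteration of the Java interface (only the class and method names follow the text; the representation of claims as dictionaries and the whitelist dialog are invented for illustration):

```python
from abc import ABC, abstractmethod

# Python transliteration of the Java contract described in the text; the
# class and method names follow it, everything else is hypothetical.
class AskSendData(ABC):
    @abstractmethod
    def select(self, claim_request):
        """Receive the server's claim request; return the Claim the user
        agrees to disclose."""

class AutoConsentDialog(AskSendData):
    """A trivial stand-in dialog that discloses only whitelisted fields."""
    def __init__(self, whitelist):
        self.whitelist = set(whitelist)

    def select(self, claim_request):
        # Keep only the requested fields the user has pre-approved.
        return {k: v for k, v in claim_request.items() if k in self.whitelist}

dialog = AutoConsentDialog(["nickname"])
claim = dialog.select({"nickname": "alice", "email": "a@example.org"})
print(claim)  # {'nickname': 'alice'}
```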

It is planned to extend this to enable non-Java SendPersonalData dialogs which can create their own Claims.

The reputation prototype developed in Activity 1 uses an alternative SendPersonalData dialog. User interface extensions and research on HCI (Activity 4) are being considered.

Other implementations of dialogs similar to the SendPersonalData dialog exist in some application prototypes comparable to PRIME. The effort to replace the current dialog with other simple PII selection dialogs like Higgins [Higgins] is moderate. However, more advanced features like AssuranceControl or the PrivPref management are very specific to PRIME; these features could only be reused in other open source software with much effort.

Using the SendPersonalData dialog as an open source project separate from PRIME is only possible if the local PRIME instance's web services are implemented. The HCI design of the dialog is based on research results from the PRIME project and represents a final HCI prototype. Thus, decoupling the SendPersonalData dialog from PRIME could be advisable if usage in a separate open source project is targeted.

DataTrack

The DataTrack is a tool integrated into the PRIME core to display information about released data. It is currently written in Java. It uses the web services provided by the PRIME client core to obtain information about the user's transactions and past service partners and shows them in different ways.

The main view is a filterable and sortable table presenting an aggregated list of past transactions; there is also the option to use a graphical slider that zooms through the history of data sent to other parties, showing the revealed information in a card-like fashion. The data is obtained via calls to the PRIME client core's different APIs. Other views show specific data sent in one session or a summary of all data sent to a specific receiver.

In its current state, the DataTrack uses API calls backed by SQL queries to obtain the data presented by the PRIME core. However, as long as the data is presented in a similar manner, there is presumably no reason why the current data source could not be replaced by any other source with minimal changes to the code base. Currently the DataTrack implements all display views as well as the possibility to retrieve, change and delete remotely stored data and to compare it with the locally stored values. However, some work on the access control mechanisms for these functions on the services side remains. The DataTrack also displays a contact information window when a receiver is double-clicked in any view.

Together with WP4.4 of PrimeLife, the new user interface will be evaluated. A first attempt to integrate a view of the privacy preserving secure log developed in Task 2.2.2 will also take place. Feedback will be integrated into the DataTrack and used for the prototypes of WP1.2 and WP1.3.

The functionality of the DataTrack would fit well into the Higgins project (see Chapter 9). Working towards common interfaces supported by Higgins, like WS-*, would be advisable. It could be worth further evaluation whether existing products like iJournal from the MozPETs suite, the Higgins user interface, or the "Enhanced History Manager" [EnhancedHistory] plugin for Firefox could be enhanced to support the full feature spectrum planned for the DataTrack within PrimeLife.

2.1.4 Activity 5 - Policies

The goal of Activity 5 is to design security and privacy policy systems for PrimeLife. This includes analysis and formalisation of legal requirements, research of new policy mechanisms, machine-readable languages for privacy as well as design and analysis of actual policy decision engines for the demonstrators that we will build. Activity 5 focuses on requirements, research aspects and development of policy systems.

Today, privacy policies have largely focused on formalising permitted usage of personal data. They have often been based on a central model where an administrator defines a single policy that covers given data items. The policy activity faces multiple research challenges:

• Broader coverage: Privacy requirements cover more than mere access to information. Examples include retention of data or combination of data items. The goal is to cover the complete life-cycle of identities.
• Policy Composition: In the emerging Web, data and policy will be composed. We need to find ways to enable distributed management and authoring of policies.
• Stronger Enforcement: Today, privacy policies are mainly enforced by means of access control monitors. One challenge we face is to invent additional enforcement mechanisms that allow protection despite the dispersion of data and the limited trust in the processor of personal information.

Because of its wide-spread adoption in the real world, PrimeLife chose XACML as the stepping stone to embed its privacy-enhancing features. As a first step, a policy engine was developed that integrates XACML with the PRIME policy language, so that policies expressed in XACML can profit from the privacy-enhancing features of the PRIME language. Both the language design and the implementation of the first engine were completed during the second year of the project. In a second step, PrimeLife will develop a purely XACML-based language and engine. We extended the XACML language to add support for credential-based access control, sanitized policy dialogs, two-sided data handling policies (i.e., policies and preferences) with automated matching, and downstream usage control (i.e., restrictions on data sharing with third parties). The language and architecture extensions were defined during the second year; the implementation of the engine will be finished during the third year of the project.

2.1.5 Activity 6 - Infrastructures

The goal of Activity 6 is to investigate the impact of security and privacy requirements on infrastructures. It studies technical and non-technical (e.g., legal, economic) requirements for successfully implementing solutions on top of existing and newly developed infrastructural elements. In particular, its work packages are concerned with:

• Privacy-preserving identity management for service architectures and general issues of identity management infrastructures, with a special focus on web architectures, web service architectures, and related architectures.
• Trusted infrastructure elements using trusted devices as an infrastructure foundation. This includes especially mobile devices with Internet connectivity.
• Privacy-enabled service composition.

As part of its work on Deliverable 6.1.1, Identity Management Infrastructure Protocols for Privacy-enabled SOA, this activity conducted a study of relevant ID management and login protocols against privacy-enabled SOA scenarios. This work has benefitted the PrimeLife contribution to standards work on architectures and frameworks (see Chapter 4); it is anticipated that the relevant standards work will continue to benefit the ongoing work in WP 6.1.

The results described in the smartcards section (see Chapter 7) strongly tie into the work of WP6.2 and correlate to the content of D6.2.1. Subsequent work will continue in the same direction, focusing on further improvements. This work is connected to relevant standards efforts in ISO/IEC JTC 1/SC 17/WG 4 (see Chapter 4).

Legal requirements for service composition were derived in Work Package 6.3 and presented at the PrimeLife and W3C Workshop on Access Control Application Scenarios.

Chapter 3 Target Technology Platforms

Data is exchanged between various kinds of computer systems connected to the Internet. The target platform for PrimeLife work is generally the Web. This section reviews the architecture of the Web and its essential technology components.

3.1 The Web

3.1.1 Architecture

The World Wide Web (see [WEBARCH]) is a global information space built on top of a set of relatively simple technologies which, in combination, have enabled its world wide deployment, and have catalysed the Internet's growth and use over the last 15 years.

Key design elements of the World Wide Web include:

Identification Uniform Resource Identifiers [URI] serve to identify resources on the Web. They provide an abstraction layer for resource identification across protocols and across document formats. New URI schemes can be introduced without having to change the surrounding format (e.g., HTML, SVG, MathML). Extension by URI scheme enables deployment of new protocols without having to change surrounding document formats. However, it is not true that dereferencing the same URI will always result in the same protocol interaction. Further, different URI schemes expose different methods -- the HTTP protocol, e.g., supports both retrieval and information posting methods (GET and POST).
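The abstraction layer can be seen in any URI library: the scheme component selects the protocol machinery, independently of the document format in which the URI appears. A short sketch using Python's standard library (the addresses are illustrative):

```python
from urllib.parse import urlparse

# Two URIs with different schemes: same identification abstraction,
# entirely different protocol machinery behind them.
uris = ["https://www.w3.org/TR/webarch/#identification",
        "mailto:team@example.org"]          # illustrative address
schemes = [urlparse(u).scheme for u in uris]
print(schemes)  # ['https', 'mailto']
```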

Interaction While URIs provide the means for extensibility in protocol space, a social agreement (codified in the set of supported protocols in user agents) leads to the choice of HTTP as the primary protocol for the Web. Key properties of HTTP include safe information retrieval (the simple retrieval of a Web page is by convention free of side effects; side effect bearing interactions can be distinguished) and negotiation of resource representations (depending on, e.g., language preferences and supported data formats).

Formats For data formats (as for protocols) the practically unlimited extensibility of the protocol and addressing layers is complemented by social conventions (codified in deployment) about the formats in use: Various variants of HTML, CSS, ECMAScript (with assorted APIs), and a few graphics formats effectively form the backbone of today's Web.

While this basic architecture remains intact, we are seeing a phase shift in the use of Web technology, toward serving as a universal platform for application development.

3.1.2 Standards

This section summarises the specifications that are at the core of today's instantiation of the Web: HTTP as the primary retrieval protocol, HTML and CSS as the primary formats used for documents and their style, and the Document Object Model's application programming interfaces as used by the ECMAScript scripting language (more commonly referred to as JavaScript).

HTTP and TLS

The Hypertext Transfer Protocol [RFC 2616] is a fundamentally stateless request/response protocol that is used to interact with Web resources. Methods that can be used in this interaction include simple retrieval (GET; a so-called safe method, as it must not cause side effects), submission of information (using POST or PUT), and other manipulations (using, e.g., DELETE). HTTP further supports content and language negotiation, redirection functionality, and advanced mechanisms for caching and proxying of requests.
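For illustration, a minimal HTTP/1.1 request message can be assembled by hand; this sketch only formats the message text and is not a client. Host names and paths are placeholders:

```python
def http_request(method: str, host: str, target: str, body: str = "") -> str:
    """Format a minimal HTTP/1.1 request message (illustration only)."""
    lines = [f"{method} {target} HTTP/1.1", f"Host: {host}"]
    if body:
        lines.append(f"Content-Length: {len(body.encode())}")
    return "\r\n".join(lines) + "\r\n\r\n" + body

# GET is 'safe': it must not cause side effects. POST carries a body and
# may have side effects, which the protocol lets intermediaries distinguish.
get = http_request("GET", "www.example.org", "/index.html")
post = http_request("POST", "www.example.org", "/form", "q=privacy")
print(get.splitlines()[0])  # GET /index.html HTTP/1.1
```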

Browser cookies -- while broadly deployed -- are a notoriously underspecified aspect of HTTP; yet, they serve a critical function on today's Web, by adding session management to an otherwise stateless protocol.
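The session mechanism can be sketched with Python's standard cookie handling: the server issues a Set-Cookie header carrying a session identifier, and the client sends that identifier back on subsequent requests, adding state to the otherwise stateless protocol. The identifier value is a placeholder:

```python
from http.cookies import SimpleCookie

# Server side: issue a session identifier in a Set-Cookie header.
jar = SimpleCookie()
jar["session"] = "a1b2c3"        # placeholder session identifier
jar["session"]["path"] = "/"
header = jar.output(header="Set-Cookie:")
print(header)                    # Set-Cookie: session=a1b2c3; Path=/

# Client side: parse the cookie back out of a subsequent request.
received = SimpleCookie("session=a1b2c3")
print(received["session"].value)  # a1b2c3
```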

Authentication within HTTP is limited to simple username and password based approaches ("Basic" and "Digest" authentication), even though the protocol's framework can be extended to different authentication protocols. In practice, even these features go largely unused. Deployments mostly rely on HTML forms to solicit user names and passwords, and on cookie-based session management to tie a prior authentication transaction to a session. Identity systems for the Web take a similar approach: The basic identity transaction is either conducted on top of HTTP, or through a separate protocol, and the result of that transaction is then tied to a session.
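Basic authentication simply base64-encodes the username and password into a request header, which is why it is only acceptable over a confidential channel such as TLS. A minimal sketch (credentials are placeholders):

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    """Build the Authorization header for HTTP Basic authentication:
    base64 of "username:password". Note: base64 is encoding, not
    encryption; the credentials are recoverable by anyone who sees them."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Authorization: Basic {token}"

print(basic_auth_header("user", "pass"))
# Authorization: Basic dXNlcjpwYXNz
```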

Similarly, confidentiality and signature services are out of scope for HTTP. These functionalities are instead provided by the TLS protocol (formerly known as SSL). TLS, too, provides a framework for transporting user credentials ([RFC 2818]).

Deployment of HTTP is virtually ubiquitous: HTTP servers can be found in about any network enabled device (often as configuration and control interface of choice); HTTP clients are found on mobile phones, gaming consoles, and of course personal computers.

The broad deployment of HTTP causes significant inertia: changes to the HTTP protocol (e.g., a new version) can only be deployed in the mid-term. Standardisation work on HTTP is therefore mostly focused on documenting current practice: the IETF HTTPbis Working Group [HTTP bis] is focused on specification maintenance and errata work; the HTTP State Working Group [HTTP state WG] is chartered to provide a specification for HTTP cookies as currently deployed.

As far as security and identity mechanisms for HTTP are concerned, the currently active IETF working group is specifically chartered only to document the protocol's properties. Work on this document had not finished at the time of this report (see [HTTP bis Security Properties]).

HTML, CSS

The Hypertext Markup Language, HTML, is (like HTTP) at the root of the Web's stack of specifications. It gives authors the means to:

• Publish online documents with headings, text, tables, lists, photos, etc.
• Retrieve online information via hypertext links, at the click of a button.
• Design forms for conducting transactions with remote services, for use in searching for information, making reservations, ordering products, etc.
• Include spread-sheets, video clips, sound clips, and other applications directly in their documents.

The object tag in HTML enables embedding of arbitrary objects, and subsumes the functionality of the deprecated applet tag. Extensions to HTML can also be based on using class names as an indicator of content's semantics. The microformat community [Microformats] advocates this approach to embed semantic data within HTML documents.
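The class-name approach can be sketched with a small parser that collects class attribute values, which is the hook a microformat consumer uses to find semantically marked-up content. The snippet is illustrative, not a complete hCard parser:

```python
from html.parser import HTMLParser

class ClassCollector(HTMLParser):
    """Collect class attribute values, as a microformat consumer does to
    locate semantically marked-up content in ordinary HTML."""
    def __init__(self):
        super().__init__()
        self.classes = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name == "class" and value:
                self.classes.extend(value.split())

# An hCard-like snippet (illustrative mark-up):
doc = '<div class="vcard"><span class="fn">Alice Example</span></div>'
parser = ClassCollector()
parser.feed(doc)
print(parser.classes)  # ['vcard', 'fn']
```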

Current versions of HTML are HTML 4.01 [HTML 4.01], the last SGML version of HTML, and XHTML 1.0 [XHTML 1.0], a reformulation of that language in XML. Note that XHTML 1.1 [XHTML11] further reformulates XHTML 1.0 according to a modular pattern, without substantive differences to the resulting mark-up.

Ongoing development is focusing on HTML5 [HTML5], an effort to provide an interoperable specification for HTML and associated APIs. Work on XHTML 2 [XHTML 2], once thought of as the next generation of the XML-based XHTML, has stopped.

Layout information for HTML (and other structured document formats, including XML applications) can be specified using Cascading Style Sheets [CSS 2].

Document Object Model, ECMAScript, XMLHttpRequest

The W3C Document Object Model (DOM [DOM Level 3 Core]) is an API for manipulating HTML and XML documents. It relies on a structure model of the document and defines accessors to the various components of this structure. It is platform-neutral and language-neutral. It is organised in levels which specify required and optional features: Level 1 defines a core model and a basic API for HTML, Level 2 enhances the core model and adds views, events, style and traversal APIs. Level 3 has a more complete core model (including more DOM types) and provides load-and-save and validation APIs. All the W3C DOM Level recommendations include bindings to Java and to ECMAScript. DOM is the preferred API to manipulate HTML and XML documents on the Web when accessing elements in non-sequential order is required.
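A minimal sketch of DOM-style access, using Python's xml.dom.minidom (which implements a subset of the W3C DOM Core API); the XHTML fragment is invented for illustration:

```python
from xml.dom.minidom import parseString

# Parse a small XHTML fragment and access it through the DOM API.
doc = parseString(
    '<html><body><h1 id="title">Hello</h1>'
    '<p class="intro">First paragraph.</p></body></html>'
)

# Non-sequential access: jump straight to elements by tag name.
heading = doc.getElementsByTagName("h1")[0]
print(heading.getAttribute("id"))          # title
print(heading.firstChild.nodeValue)        # Hello

# Tree manipulation: create and append a new node.
p = doc.createElement("p")
p.appendChild(doc.createTextNode("Added via the DOM."))
doc.getElementsByTagName("body")[0].appendChild(p)
print(len(doc.getElementsByTagName("p")))  # 2
```

The same methods (getElementsByTagName, createElement, appendChild) exist in the ECMAScript binding used by browsers, which is what client-side scripts rely on.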

ECMAScript, standardised by ECMA International (see [ECMAScript]) in 1999, is mostly used as a language to manipulate Document Object Models on the Web. It is the major language for client-side scripting, implemented in all common clients for all kinds of platforms. ECMA TC39 [ECMA TC39] was actively working on the ECMAScript 4 language at the time of the first instance of this report. The TC has since reconsidered its approach and has published an ECMAScript 5th Edition [ECMAScript 5th Edition] specification. This specification is based on an effort that was previously known as ECMAScript 3.1 and was expected to be a subset of ECMAScript 4; work on ECMAScript 4 has halted. Development of the language is proceeding.

The XMLHttpRequest [XMLHttpRequest] object is another platform- and language-neutral API that allows scripts to perform HTTP client requests without reloading the Web page. It builds on some parts of the DOM model and constitutes the core of the Ajax technique. The goal of such a specification is to unify the techniques used for dynamic content and achieve the necessary interoperability that proprietary technologies (ActiveX, inline frames, proprietary applets, Flash...) prevent. The specification is currently in W3C Last Call.

3.1.3 Evolving Web Application Development Paradigms - Web 2.0

The power of Web applications often comes from the easy combination of data and services across multiple sources: In the easiest case, a meeting description might include a map service inline, highlighting the meeting location. More complex mash-ups might combine any number of data sources, factor in personal information (e.g., travel plans) to answer the user's questions, and trigger activities on multiple possible services.

As application programming interfaces and client-side scripting have matured in recent browser generations, they have enabled an increasing shift of complexity toward the client side: Where much of the complexity of "Web 1.0" applications resided on the server side, "Web 2.0" and "AJAX" programming puts complexity on the client, and thinks of the server side as generic APIs that can be invoked by complex applications running on the client.

These choices have a profound impact on the security and privacy structure of the Web: On the one hand, there is increased susceptibility of services to abuse, as careless use of Web 2.0 design patterns might put security (and privacy) critical aspects of business logic on the client, and thereby in the hands of an attacker. More dangerously, mash-ups will often run scripts from different trust domains within a single domain of control (or within several, insufficiently isolated domains of control). As a result, the technical environment's enforcement of social and business agreements between different data processing parties becomes difficult in actually deployed mash-up environments.

Recent research and development in the area of JavaScript security models has the potential to help change this environment and to improve the client-side Web programming environment's ability to enforce security and privacy policies. The open source Caja [Caja] project extends JavaScript to include a capability-based security model. This security model has been deployable for several years now, through a JavaScript-to-JavaScript compiler. Key ideas from Caja have found their way into the ECMAScript 5th edition specification.

On the other hand, modern Web application development techniques improve the abilities of collaborating actors to share user data and track behaviour, beyond what is possible with simple cookies, HTML pages, and forms. Classical privacy-enhancing techniques that might, e.g., impose controls on cookies or warn before form submissions are losing their usefulness, as new technologies enable storage of client-side state (e.g., to enable offline Web applications), tracking of users, and exchange of information between different sites.

It is an open research question what leverage points are best suited to build the enforcement of privacy policies into Web applications, and to create business and technical incentives for Web application developers to disclose privacy intents.

Additional approaches to data sharing in Web 2.0 scenarios involve the passing of personal information and authorisations between different sites through (mostly) redirect patterns. Relevant developments include OAuth (see Chapter 7) and OpenID (see Chapter 9).

PrimeLife Perspective

Overall, mash-ups and the use of Web 2.0 programming patterns to process personal information are going to stay with us. In the context of Activity 1, PrimeLife will analyse these use cases in more detail. In particular, work package 1.1 is concerned with how user-provided content can be made trustworthy. We have already described these use cases in the PrimeLife results (see Chapter 2).

Standardisation Efforts

Specifications relevant to Web 2.0 programming patterns are under development in a number of places: The W3C HTML Working Group [HTML WG], the W3C Web Applications Working Group [WebApps WG] and the W3C Device API Working Group [W3C DAP] are continuing development of the HTML5 [HTML 5] specification and assorted JavaScript APIs (including cross-origin communication, local storage, and access to additional device capabilities like cameras and microphones). Work previously covered by the W3C Web Application Formats [WAF WG] and Web API Working Groups [Web API] has been taken over by the Web Applications WG.

Key ideas from Caja [Caja] have found their way into the ECMAScript standardisation work at ECMA TC39 [ECMA TC39], and into the resulting ECMAScript 5th edition specification. Other relevant work is either being done under the umbrella of open source projects or ad-hoc initiatives, and some of it is being introduced into the ongoing ECMAScript standards work.

3.1.4 Semantic Web

The Semantic Web provides a common framework that allows data and metadata to be shared and reused across application, enterprise, and community boundaries. Key ingredients of this framework include the simple, yet powerful data model of the Resource Description Framework [RDF-PRIMER]; a standardised query language [SPARQL]; and an ontology language [OWL] that enables machine-readable expression of the relationships between different concepts.
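The flavour of the RDF data model and of SPARQL-style querying can be sketched as follows; the graph below is a toy example, and the match function is an illustrative stand-in for a query engine, not an implementation of SPARQL:

```python
# A graph is a set of (subject, predicate, object) triples whose
# terms are URI references (or literals), so vocabularies from
# different parties can be mixed without name clashes. The FOAF
# namespace is real; the example.org resources are illustrative.
FOAF = "http://xmlns.com/foaf/0.1/"
EX = "http://example.org/"

graph = {
    (EX + "alice", FOAF + "name", "Alice"),
    (EX + "alice", FOAF + "knows", EX + "bob"),
    (EX + "bob", FOAF + "name", "Bob"),
}

def match(graph, s=None, p=None, o=None):
    """SPARQL-like triple pattern matching: None acts as a variable."""
    return [(ts, tp, to) for (ts, tp, to) in graph
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

# "Whom does Alice know?" -- analogous to a one-pattern SPARQL query.
print([o for (_, _, o) in match(graph, s=EX + "alice", p=FOAF + "knows")])
# ['http://example.org/bob']
```

Because every term is a full URI, a second data source using the same FOAF predicates can simply be unioned into the graph, which is the integration property the text describes.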

The usage of URI references as identifiers enables different parties to coin new terms without the risk of clashing with others, thereby enabling easier integration and mixing of data.

From a privacy perspective, Semantic Web technology will enable progressively easier and more powerful aggregation of personal information. Where a lack of integration might in the past have helped individual privacy, the Semantic Web promises to overcome that gap.

The power of Semantic Web technologies cuts both ways, however: It also eases the tasks of identity management by providing a framework that can be used to express not just personal information, but also privacy practices, policies, and preferences. As broader and more

effective data integration becomes possible, so does distributed and scalable compliance monitoring, leading to improved accountability of those who process personal information.

3.1.5 Privacy technologies for the Web

Very few Privacy-Enhancing Technologies (PET) are standardised today. There is a large variety of tools (see Chapter 9) that do not interoperate: user preferences put into one tool cannot be used in another tool. This also makes it very hard for E-Commerce sites to obtain predictable results when designing their portals. P3P is the only major exception to this rule. It is discussed together with other relevant standards in more detail separately in this document, specifically P3P (see Chapter 6) and APPEL (see Chapter 6). Unlike P3P, APPEL has never reached the status of a W3C Recommendation.

3.2 Service Oriented Architectures

The basis for this text is an updated version of Geuer-Pollmann/Claessens [Geuer-Pollmann/Claessens]. The term ‘Web services’ is found nearly everywhere in the enterprise platforms and networking domains. Generally speaking, ‘Web service’ refers to the transfer of XML via Internet protocols, such as HTTP or SMTP. The ‘Simple Object Access Protocol’ (SOAP Version 1.2, 2007 [SOAP12]) is an XML-based protocol which defines how structured and typed information can be exchanged between peers in a distributed and decentralised environment.
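As a rough illustration of what such an exchange carries on the wire, the sketch below assembles a minimal SOAP 1.2 envelope; the body payload (the http://example.org/stock namespace and its elements) is invented for illustration:

```python
import xml.etree.ElementTree as ET

# SOAP 1.2 envelope namespace as defined by the W3C Recommendation.
SOAP_ENV = "http://www.w3.org/2003/05/soap-envelope"
ET.register_namespace("env", SOAP_ENV)

# Envelope with an (empty) Header and a Body carrying the payload.
envelope = ET.Element(f"{{{SOAP_ENV}}}Envelope")
ET.SubElement(envelope, f"{{{SOAP_ENV}}}Header")
body = ET.SubElement(envelope, f"{{{SOAP_ENV}}}Body")

# Hypothetical application payload: a stock quote request.
request = ET.SubElement(body, "{http://example.org/stock}GetQuote")
ET.SubElement(request, "{http://example.org/stock}Symbol").text = "IBM"

print(ET.tostring(envelope, encoding="unicode"))
```

The Header element is where the security specifications discussed below (WS-Security and its relatives) attach their information, leaving the Body payload untouched.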

In order to provide security, reliability, transaction abilities and rich meta-data support for Web services, additional specifications exist on top of the XML/SOAP stack. Figure 1 provides an overview of important Web service specifications and how they relate to each other. Each box in the figure denotes a separate protocol standard in the web services world.

When standards build on top of each other, they are arranged as stacked boxes; e.g., WS-SecureConversation builds on WS-Trust, which builds on WS-Security. Several protocols can be grouped together into protocol families, which is expressed by grey boxes. Apart from the individual standards, the standard families can also build on top of each other; e.g., the Messaging protocol layer relies on the foundation layer for message transport.

The standards in Figure 1 are color coded, indicating the defining organization and the status of the specification. The core technologies in the XML space are all defined by the World Wide Web Consortium (W3C). Many of the higher-layer specifications are driven by multiple industry players, including Microsoft, IBM, BEA Systems, SAP and others. These specifications help to make progress in the interoperability between the different industry platforms, most notably Microsoft’s .NET [MS .NET] platform and IBM’s WebSphere [IBM WebSphere] software. The results are often donated to standards organisations like OASIS [OASIS], which will care for their future evolution.

XML lays the basis for all the standards, since it is the omnipresent encoding schema. Other basic technologies in this group are SOAP messages describing service invocations, their transport protocols HTTP/UDP, and basic XML cryptography schemas. The next layer, called 'Messaging', groups standards that provide advanced communication features such as flow control, transactions, establishment of secure communication channels, and trust establishment. The 'Infrastructure and Profiles' layer contains standards which typically affect the interaction between multiple services, such as interoperability, trust establishment between services/parties across organisational borders, and distributed management. The 'Metadata' layer is orthogonal to the latter three layers. It groups standards that deliver information on how to find and invoke a service, e.g., address lookup, specific metadata, security policy, and service interface.

Figure 1: Overview of existing Web service specifications and their relations.

3.2.1 OASIS WS-Security

The WS-Security [WS-Security] specification defines mechanisms for integrity and confidentiality protection, and data origin authentication for SOAP messages and selected parts thereof. The cryptographic mechanisms are utilised by describing how XML Signature and XML Encryption are applied to parts of a SOAP message. That includes processing rules so that a SOAP node (intermediaries and ultimate receivers) can determine the order in which parts of the message have to be validated or decrypted.

These cryptographic properties are described using a specific header field, the Security header. This header provides a mechanism for attaching security-related information to a SOAP message, and multiple Security headers may exist inside a single SOAP message. Each of these headers is intended for consumption by a different SOAP intermediary. This enables intermediaries to encrypt or decrypt specific parts of a message before forwarding it, or enforces that certain parts of the message must be validated before the message is processed further.

Besides the cryptographic processing rules for handling a message, WS-Security defines a generic mechanism for associating security tokens with the message. ‘Associating a security token’ means that one or more tokens are included in Security headers in the message and that a referencing mechanism is introduced to refer to these tokens. Tokens generally are either identification or cryptographic material, or they may be expressions of capabilities (e.g., signed authorisation statements). For instance, the certificate for signature validation may be added into the Security header. That may be done either by placing it into the signature itself (which makes re-use a bit complicated and fragile) or by directly making it a child of the Security header and referencing it from the signature. The latter use has the advantage that other signatures or security operations may directly refer to that token.

WS-Security, available in version 1.1 since February 2007, defines a simple username token, a container for arbitrary binary tokens (base64 encoded), a container for XML-formatted tokens, and an encrypted data token. Additional specifications define various ‘token profiles’ that introduce special token formats. For instance, the X.509 Certificate Token Profile 1.1 [X.509 Certificate TP 1.1] defines how X.509 certificates, certificate chains or PKCS#7 certificate revocation lists may be used in conjunction with WS-Security. The ‘Username Token Profile 1.1’ extends the existing username token by adding literal plaintext passwords, hashed passwords, time variant parameters (nonce) and creation time stamps. The Rights Expression Language (REL) Token Profile 1.1 [REL TP 1.1] links WS-Security to ISO/IEC 21000-5. The Kerberos Token Profile 1.1 [Kerberos TP 1.1] defines how Kerberos tickets are embedded into SOAP messages and the SAML Token Profile 1.1 [SAML TP 1.1] defines how SAML 1.1 and 2.0 assertions (see also [SAML 2.0 Bindings]) can be included.
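The structure described above can be sketched as follows: the snippet assembles a SOAP message whose Header carries a Security header with a simple UsernameToken. The namespace URI is the OASIS WSS 1.0 secext namespace; the credentials are dummies, and a real message following the Username Token Profile would normally carry a hashed password, nonce and creation timestamp rather than a literal plaintext password:

```python
import xml.etree.ElementTree as ET

SOAP = "http://www.w3.org/2003/05/soap-envelope"
WSSE = ("http://docs.oasis-open.org/wss/2004/01/"
        "oasis-200401-wss-wssecurity-secext-1.0.xsd")

env = ET.Element(f"{{{SOAP}}}Envelope")
header = ET.SubElement(env, f"{{{SOAP}}}Header")

# The Security header is a child of the SOAP Header, separate from
# the Body, so intermediaries can process it independently.
security = ET.SubElement(header, f"{{{WSSE}}}Security")
token = ET.SubElement(security, f"{{{WSSE}}}UsernameToken")
ET.SubElement(token, f"{{{WSSE}}}Username").text = "alice"
ET.SubElement(token, f"{{{WSSE}}}Password").text = "secret"

ET.SubElement(env, f"{{{SOAP}}}Body")
print(ET.tostring(env, encoding="unicode")[:60], "...")
```

A signature in the same Security header could then reference the token through the WS-Security referencing mechanism instead of embedding key material itself.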

WS-Security is one of the basic security specifications in the Web service world. Therefore it is definitely relevant for PrimeLife.

3.2.2 OASIS WS-SecureConversation

The WS-Security specification introduced the concept of message level security. By utilising only WS-Security to encrypt and sign Web service messages, a lot of key management overhead is incurred. For instance, if a Web service requires each message to be encrypted using a 2048-bit RSA operation, and 1000 service invocations may happen during the next 3 minutes, it becomes obvious that this concept does not scale very well.

In the transport layer, HTTP 1.1 permits keeping an existing SSL/TLS connection open so that subsequent requests to a Web server may be sent via the already established secured connection. WS-SecureConversation [WS-SecureConversation 1.4] brings this concept into the Web services world. This is done by introducing mechanisms to establish and share so-called ‘security contexts’. Based on established security contexts or arbitrary already existing shared secret keys, WS-SecureConversation provides mechanisms to derive shared key material (read: session keys).

Security contexts can be established in three different ways. First, a security context token (SCT) may be retrieved using the mechanisms of WS-Trust. In that case, the requestor retrieves the SCT from some security token service that is trusted by the Web service. The second way is that the requestor creates its own SCT and sends that SCT to the Web service. The problem may be that the Web service does not trust the requestor to create an appropriate SCT and rejects the self-created SCT. A third option is that both the requestor and the Web service mutually agree on a security context using a challenge-and-response process. An established SCT is afterwards used to derive session keys. These session keys may then be used for subsequent message encryption and message authentication codes (symmetric ‘signatures’) with WS-Security.
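The idea of deriving session keys from an established security context can be sketched as follows. Note that this is an illustrative HMAC-based construction, not the derivation function prescribed by WS-SecureConversation (which uses a P_SHA-1 style function parameterised by label, nonce, offset and length):

```python
import hashlib
import hmac

def derive_key(context_secret: bytes, label: bytes, nonce: bytes,
               length: int = 16) -> bytes:
    """Derive key material from a shared context secret by chaining
    HMAC blocks until enough bytes are produced (illustrative only)."""
    out = b""
    block = b""
    counter = 0
    while len(out) < length:
        counter += 1
        block = hmac.new(context_secret,
                         block + label + nonce + bytes([counter]),
                         hashlib.sha256).digest()
        out += block
    return out[:length]

secret = b"shared-security-context-secret"
enc_key = derive_key(secret, b"encryption", b"nonce-1")
sig_key = derive_key(secret, b"signature", b"nonce-1")

# Different labels yield independent keys from the same context, so
# one cheap context establishment serves many message exchanges.
print(enc_key != sig_key, len(enc_key))  # True 16
```

This is what makes the approach scale: the expensive asymmetric operations happen once per context, after which each message only costs symmetric operations with derived keys.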

In the scope of PrimeLife, as in most other Web service based solutions, WS-SecureConversation is a reasonable addition to WS-Security.

3.2.3 OASIS WS-Trust

The WS-Trust [WS-Trust 1.4] specification introduces the concept of ‘security token services’ (STS). A security token service is a Web service that can issue and validate security tokens. For instance, a Kerberos ticket granting server would be an STS in the non-XML world. A security token service offers functionality to issue new security tokens, to renew existing tokens that are expiring, and to check the validity of existing tokens. Additionally, a security token service can convert one security token into a different security token, thus brokering trust between two trust domains.

For example, a Web service describes required security tokens for Web service calls using WS-SecurityPolicy/PolicyAttachment. A requestor may want to call that specific Web service but may not have the right security tokens indicated by the policy. The Web service may require SAML credentials from a particular trust domain whereas the requestor only has an X.509 certificate from its own domain. By requesting the ‘right’ matching token (credential) from the security token service, the requestor may get back a token from the STS that can be included when calling the Web service in question. The decision what exactly the ‘right’ token is can be made either by the requestor or by the STS. The requestor may inspect the Web service’s policy and specifically ask the STS: "I have the attached X.509 certificate and need a SAML token". The other option is that the requestor includes its possessed tokens and states what Web service it intends to call: "I possess the following tokens and I would like to call the Web service http://foo/bar. Please give me whatever token may be appropriate."
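The token-exchange pattern can be sketched schematically; all class names and token representations below are invented for illustration and bear no relation to the actual WS-Trust message formats:

```python
# Schematic sketch of the WS-Trust pattern: a requestor holding an
# X.509-style credential asks a security token service (STS) to
# issue the SAML-style token a target service requires.
class SecurityTokenService:
    def __init__(self, trusted_issuers):
        self.trusted_issuers = trusted_issuers

    def issue(self, presented_token):
        """Validate the incoming token, then broker trust by issuing
        a token in the format the target domain expects."""
        kind, issuer, subject = presented_token
        if kind != "X509" or issuer not in self.trusted_issuers:
            raise PermissionError("presented token not trusted")
        return ("SAML", "sts.example.org", subject)

sts = SecurityTokenService(trusted_issuers={"ca.example.org"})

# The requestor only has an X.509 credential from its own domain ...
saml_token = sts.issue(("X509", "ca.example.org", "alice"))
# ... and receives a token it can include when calling the service.
print(saml_token)  # ('SAML', 'sts.example.org', 'alice')
```

The service in question never needs to understand the requestor's home-domain credential; it only has to trust the STS, which is exactly the trust-brokering role the text describes.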

WS-Trust provides a rich interface that permits the implementation of various use cases. For instance, the requestor may include time variant parameters as entropy for a token generation process. The token service may return secret key material to the requestor (so-called proof-of-possession tokens) along with the requested security token, so that the requestor can prove that it possesses the security token. For instance, the requested security token may be a certificate whereas the proof-of-possession token is the associated private key. The security token service may also return multiple keys, like a certificate along with its validation chain, or it may create key exchange tokens with which the requestor can encrypt key material for the intended Web service. A requestor can also express requirements on algorithms and key strengths for required tokens.

WS-Trust defines protocols, including challenge-and-response protocols, to obtain the requested security tokens, thus enabling the mitigation of man-in-the-middle and message replay attacks. The WS-Trust specification also addresses the case where a requestor needs a security token to implement some delegation of rights to a third party. For instance, a requestor could request an authorisation token for a colleague that may be valid for a given time interval. WS-Trust utilises WS-Security for signing and encrypting parts of SOAP messages, as well as WS-Policy/SecurityPolicy to express and determine what particular security tokens may be consumed by a given Web service.

WS-Trust is a basic building block that can be used to rebuild many of the already existing security protocols and make them fit directly in the Web services world by using Web service protocols and data structures. It is thus essential for service composition in PrimeLife.

3.2.4 W3C WS-Policy

The Web Services Policy Framework (WS-Policy) [WS-Policy 1.5] provides a general-purpose model to describe Web service related policies. A policy can describe properties, requirements and capabilities. For example, a policy may mandate that a particular Web service only provides services between 8:00 AM and 5:00 PM, or that service requests must be signed using an X.509 certificate (of course not by the certificate but by its associated key). Policies also allow the definition of different available options, so that machines can figure out, based on their own policy and a service's policy, what requests may be accepted and what requests may not. WS-Policy by itself only provides a framework to describe logical relationships between policy assertions, without specifying any assertion.

WS-PolicyAttachment [WS-PolicyAttachment 1.2] attaches policies to different subjects. ‘Web service related’ means policies apply to service endpoints or to XML data. A policy can be attached to an XML element (by embedding the policy itself or a link to the policy inside the element) or by linking from the policy to the subject that is described by the policy. WS-PolicyAttachment also defines how policies can be referenced from WSDL documents and how policies can be attached to UDDI entities and stored inside a UDDI repository. These repositories are typically used as yellow pages for service lookups. WS-MetadataExchange [WS-MetadataExchange 1.1] defines protocols to retrieve metadata associated with a particular Web service endpoint. For example, a WS-Policy document can be retrieved from a SOAP node using WS-MetadataExchange.
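The idea that machines can figure out acceptable requests from two policies can be sketched as a simple intersection of policy alternatives; the assertion names below are invented, and real WS-Policy intersection operates on typed XML assertions rather than plain strings:

```python
# Sketch of WS-Policy style matching: each policy is a list of
# alternatives, each alternative a set of assertions; interaction can
# proceed if some alternative is acceptable to both sides.
def compatible_alternatives(service_policy, client_policy):
    """Return the service alternatives the client also supports."""
    return [alt for alt in service_policy if alt in client_policy]

service_policy = [
    frozenset({"X509Token", "SignBody"}),
    frozenset({"UsernameToken", "TransportBinding"}),
]
client_policy = [
    frozenset({"UsernameToken", "TransportBinding"}),
]

matches = compatible_alternatives(service_policy, client_policy)
print(len(matches), sorted(matches[0]))
# 1 ['TransportBinding', 'UsernameToken']
```

If the intersection is empty, no request the client can form will satisfy the service, which is exactly the predictability WS-Policy is meant to provide before any message is sent.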

3.2.5 OASIS WS-SecurityPolicy

WS-SecurityPolicy [WS-SecurityPolicy 1.3] defines certain security-related assertions that fit into the WS-Policy framework. These assertions are utilised by WS-Security, WS-Trust and WS-SecureConversation. The ‘SecurityToken’ assertion tells a requestor what security tokens are required to call a given Web service (‘security tokens’ are described in the sections on WS-Security and WS-Trust). Integrity and confidentiality assertions identify the message parts that have to be protected and define what algorithms are permitted. Visibility assertions identify what particular message parts have to remain unencrypted in order to allow SOAP nodes along the message path to operate on these parts. The ‘MessageAge’ assertion enables entities to constrain after what time a message is to be treated as expired.

3.2.6 PrimeLife Perspective

Web services are an important building block for realising distributed multi-party business processes. The PrimeLife document H6.3.1 investigates requirements for privacy-enhanced SOA. It develops a comprehensive set of requirements for service-oriented architectures facilitating compliance with legal requirements and privacy legislation. The document offers possible technical solutions to meet each of the requirements; techniques and instruments are briefly described as possible solutions in its requirements section. It turned out that the current set of standardised Web service protocols offers a good basis for trust establishment and for securing a conversation against distrusted third parties. However, with regard to most of the techniques described in H6.3.1, considerable additional research has to be done before they will be mature enough to be implemented in existing SOAs or even to be standardised.

Moreover, the eCV demonstration scenario of Activity 6, which is outlined in PrimeLife document H6.1.1, is to a large extent realised with Web services; security and trust-establishing Web service protocols are certainly used there.

Chapter 4 Specification Developing Organisations

4.1 W3C

4.1.1 Overview

The World Wide Web Consortium [W3C] is an international consortium with over 400 member organisations from more than 40 countries. W3C Members include vendors of technology products and services, content providers, corporate users, research laboratories, standards bodies, and governments, all of whom work to reach consensus on a direction for the Web.

W3C's main role is to standardise Web technologies by creating and managing Working Groups that produce specifications (called Recommendations) describing the building blocks of the Web. W3C makes them freely available to all, as part of the open Web platform. Working Group participants are individuals from member companies or invited experts selected for their expertise in the area. W3C also opens Interest Groups to bring together people who wish to evaluate potential Web technologies and policies; an Interest Group is a forum for the exchange of ideas. The W3C technical team contributes to and coordinates the activities in the various fields: Web architecture, protocols, Web applications, ubiquitous Web, interaction, Web services, Semantic Web, privacy and security, Web accessibility, and internationalisation.

Particularly relevant W3C work for the PrimeLife project includes:

• The Platform for Privacy Preferences ([P3P 1.0 Spec], [P3P 1.1 Spec])

• APPEL [APPEL]

• XML Signature (XML Signature Syntax and Processing Recommendation [XML Sig], in collaboration with IETF (see Chapter 4))

• XML Encryption (XML Encryption Syntax and Processing Recommendation [XML Enc], Decryption Transform for XML Signature [XML Sig Transform])

• Web Services Policy (Web Services Policy Framework [WS-Policy 1.5], Web Services Policy Attachment [WS-PolicyAttachment 1.5])

Currently, several Groups are specifically involved in Privacy and Security for the Web:

• Web Security Context Working Group [WSC]

• XML Security Working Group [XmlSec]

• Device API and Policy Working Group [W3C DAP]

• Policy Languages Interest Group [PLING]

Chapter 2 also describes a number of key technologies for the Web that have been at the core of W3C work since its creation.

4.1.2 W3C and PrimeLife Workshop on Access Control Application Scenarios

In November 2009, PrimeLife and W3C jointly organized a Workshop on Access Control Application Scenarios [Workshop on Access Control Application Scenarios], held in Luxembourg.

The workshop attracted 20 position papers [Position Papers] from submitters across industry and research. Submitters included participants in the MASTER, SWIFT, and TAS3 EU projects, the ENCORE UK project, and several submissions from within PrimeLife. The workshop was chaired by Rigo Wenning (W3C and PrimeLife) and Hal Lockhart (Oracle; chair, OASIS TC XACML).

As XACML is a widely used language, most papers and input to the workshop referred to it. But the discussions were not strictly limited to XACML: There was some focus on using XACML to implement privacy-friendly identity management, but the variety of use cases and papers submitted led to more general discussion. As a result, the workshop identified extension points for XACML that are further detailed below. We anticipate that further discussion will be needed in the areas of obligation languages and data handling policies; these areas will be at the core of a second W3C and PrimeLife workshop.

Discussion at the workshop led to the conclusion that definitions of attribute semantics were of interest. The chair invited all participants to contribute their semantics to the TC XACML that could act as a clearing house for those ontologies. This way, duplication of attributes could be avoided and a cleared vocabulary could be standardized for a wider audience.

But the relations of attributes to each other are also of high interest, as they allow for complex scenarios and equally complex matching algorithms. These are out of scope for the current charter of the TC XACML and are much closer to Semantic Web initiatives. Vocabularies making use of such relations should be contributed to W3C's Policy Languages Interest Group (PLING).

Attributes

The workshop gathered many people with solutions for very specific issues. As was already mentioned in the call for participation, privacy was one of them. Privacy has a special relation to access control; therefore, privacy-friendly access control scenarios were presented. Mostly, they used XACML out of the box, but added the necessary semantics. XACML creates interoperability by allowing unified access control over a heterogeneous IT landscape. But to expand to inter-enterprise interoperability, or to wider use on an Internet scale, XACML needs semantics filling out its own framework that make access control conditions predictable and interoperable even where there was no prior agreement on the semantics of the access control conditions.

The participants agreed that the basis for a more widespread use of XACML is in place, as the language is already widely used and implemented in closed-shop scenarios. This provides the right momentum to expand further into the Internet sphere.

XACML Policies take a number of kinds of Attributes as inputs. Subject Attributes are most often used, and the last ten years have seen the development of a variety of mechanisms for accessing and distributing them. The second most frequently used type is Resource Attributes. Currently there are almost no standards for storage, retrieval or distribution of Resource Attributes. In both cases, current practice is largely to define Attribute syntax and semantics on an ad-hoc basis. This approach, with its risk of homonyms, synonyms and half-matching semantics, lacks a consistent path towards cross-company-border interoperability for access control.
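The attribute-based evaluation model underlying XACML can be sketched as follows; the rule structure, attribute names and deny-by-default combining behaviour shown here are an illustrative simplification, not the actual XACML policy language:

```python
# Minimal sketch of attribute-based access control in the XACML
# style: rules match Subject and Resource Attributes and yield an
# effect. Attribute names and values are illustrative only.
def evaluate(policy, subject_attrs, resource_attrs):
    """Return the effect of the first matching rule, or Deny."""
    for rule in policy:
        if (rule["subject"].items() <= subject_attrs.items()
                and rule["resource"].items() <= resource_attrs.items()):
            return rule["effect"]
    return "Deny"  # deny-by-default combining behaviour

policy = [
    {"subject": {"role": "doctor"},
     "resource": {"type": "health-record"},
     "effect": "Permit"},
]

print(evaluate(policy, {"role": "doctor", "dept": "cardiology"},
               {"type": "health-record"}))   # Permit
print(evaluate(policy, {"role": "clerk"},
               {"type": "health-record"}))   # Deny
```

The sketch also shows where the interoperability problem bites: two organisations only reach the same decision if "role", "doctor" and "health-record" mean the same thing to both, which is exactly what shared attribute vocabularies are meant to guarantee.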

The participants presented their use cases and the relevant attribute vocabularies during the workshop. PrimeLife presented a privacy vocabulary. UPC presented access control using FOAF and ODRL together to get the necessary semantics while using XACML for the policies. Other work on attribute vocabularies for export control, geospatial data and health care data were presented in the workshop. The gap between natural language and formal languages is huge. The reduction of complexity of the former into the latter is tricky and needs further work.


In a more complex scenario, as presented at the workshop, with multiple distributed decision points, matching of policies can be challenging. On several occasions, the question about a standardized matching came up. But matching by itself does not seem to be sufficient. Furthermore, a mechanism to create the resulting agreement has to be drawn up, including a feedback mechanism in case the matching fails to produce an agreement and the parties wonder why. Finally, a mechanism to visualize XACML Policies was suggested to ease the complexity and allow people to understand what the policy actually means to them.

As XACML 3.0 is currently in feature freeze so that the drafts can move onto the OASIS standardization track, there is little room for changes that would alter the schema. Changes that simply profile the use of existing XACML features, however, can be accommodated at any time. For example, the attribute semantics operate within the current XACML framework and are thus independent of the current drafts and their evolution. A separate evolution of attribute specifications was encouraged, and a corresponding initiative should be brought to the TC XACML.

Many of the participants in the workshop are affiliated with organizations that are already members of OASIS and could join the XACML TC immediately; in other cases it may be desirable to join OASIS. Non-members can contribute by posting to the XACML comment list, which only requires signing a short feedback agreement. For information, see the comment submission page [XACML comments].

Sticky Policies

An important new application area called data handling was represented in a number of workshop papers. Data handling refers to the distribution and storage of information relating to individuals. The primary requirement here is privacy protection. For privacy to be protected at all times, the privacy policies must travel with the data, and every party that receives and distributes the data must enforce them. This capability is referred to as sticky policies.

The workshop discussed the binding of a policy to a traveling resource, but also the transfer of data sets and the policing of data warehouses. Apart from the missing interoperability on the semantic side, there is a need for policy expression and for a means to transport and bind policies; a parallel issue exists in DRM. Binding policy to data in a way that survives cascades of XACML PDPs would be an extension to XACML, and participants largely agreed that this extension is needed. Several possibilities could co-exist: a binding as in XML Signature (detached or inline), an online data store containing the bindings that the PEP could query, or binding the policy directly to the payload. At the workshop there was no clear consensus in favor of any one solution.
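The "bound to the payload" option can be sketched as follows. This is a toy model assuming an invented envelope format: data and its sticky policy travel together, with a digest binding the two so that any intermediary can detect tampering, in the spirit of an inline XML Signature binding.

```python
import hashlib
import json

# Toy sticky-policy envelope: data, policy, and a digest binding them.

def make_envelope(data, policy):
    digest = hashlib.sha256((json.dumps(data, sort_keys=True)
                             + json.dumps(policy, sort_keys=True)).encode()).hexdigest()
    return {"data": data, "policy": policy, "binding": digest}

def verify_envelope(env):
    expected = hashlib.sha256((json.dumps(env["data"], sort_keys=True)
                               + json.dumps(env["policy"], sort_keys=True)).encode()).hexdigest()
    return env["binding"] == expected

env = make_envelope({"name": "Alice"}, {"purpose": "billing", "retention_days": 30})
assert verify_envelope(env)

# A downstream party silently weakening the policy breaks the binding:
env["policy"]["retention_days"] = 3650
assert not verify_envelope(env)
```

Note that a plain digest only detects modification by parties who do not recompute it; an actual deployment would use a digital signature keyed to the issuing party, as in XML Signature.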

Participants then went on to discuss the likelihood that an agreed policy will actually be followed, and several enforcement scenarios were drawn up; this turned into a discussion about trust. Participants also agreed that the term "policy" in this sense denotes the agreement between both sides, which means there must be a mechanism for reaching policy agreement and a format for recording its result. But can enforcement by the remote PEP be trusted? Is a system for revoking permissions needed? Liberty Alliance's CARML was mentioned as one possible format, but it would again require the elaboration of further attributes. The workshop participants were encouraged to bring the various suggestions presented to TC XACML for further consideration and to actively defend those viewpoints during standardization.

An additional issue came up when considering that access policies with conditions travel around. The sending service has a set of policies, but the receiving service also already has its own set of policies (endogenous policies). In practice, those policies must be combined in order to compute a concrete result: whether access can be granted, or whether the receiving service is able to accommodate the requirements of the sending service. It quickly became clear that the combinability of policies turns into a major requirement once more complex distributed or ad-hoc systems are considered. Several algorithms are already available, but none of them is currently standardized; yet standardization of the combining algorithm is needed to design policies and systems with predictable results. XACML provides a built-in set of policy-combining algorithms, but further work is needed to determine their suitability for this application.
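The behaviour of one such built-in algorithm can be sketched as follows. This is a simplified, illustrative rendering of XACML's deny-overrides combining algorithm in Python; real XACML additionally handles Indeterminate results, which are omitted here.

```python
# Simplified deny-overrides: if any applicable policy denies, the
# combined result is Deny; a Permit suffices only when nothing denies.

def deny_overrides(decisions):
    if "Deny" in decisions:
        return "Deny"
    if "Permit" in decisions:
        return "Permit"
    return "NotApplicable"

sending_policy = "Permit"      # decision under the policy travelling with the data
endogenous_policy = "Deny"     # decision under the receiving service's own policy

# The receiving service's endogenous Deny wins:
assert deny_overrides([sending_policy, endogenous_policy]) == "Deny"
assert deny_overrides(["Permit", "NotApplicable"]) == "Permit"
assert deny_overrides(["NotApplicable"]) == "NotApplicable"
```

Which combining algorithm applies must itself be agreed between the parties; choosing deny-overrides versus, say, permit-overrides changes the outcome for the same pair of policies, which is why a standardized choice is needed for predictable results.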

Obligations

For privacy policies, but also for other areas to be policed, there are conditions and actions that are not tied to an access control event. At the moment, XACML has an intentionally underspecified Obligations element that people use in creative ways. This underspecification, however, has the side effect of undermining the interoperability of such obligations: one cannot be sure whether the specified actions are actually performed by the receiving service. One immediate requirement was that if the receiving service does not understand an obligation, it should deny access and provide feedback to the requester.
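The requirement stated above can be sketched as follows; the obligation identifiers are invented for illustration and do not come from the XACML specification.

```python
# Toy PEP behaviour: an obligation the service cannot interpret
# forces a Deny, together with feedback naming the unsupported items.

SUPPORTED_OBLIGATIONS = {"delete-after-30-days", "notify-data-subject"}

def enforce(decision, obligations):
    unknown = [o for o in obligations if o not in SUPPORTED_OBLIGATIONS]
    if unknown:
        return "Deny", f"unsupported obligations: {unknown}"
    return decision, None

# Understood obligations pass through unchanged:
assert enforce("Permit", ["delete-after-30-days"]) == ("Permit", None)

# An unknown obligation turns a Permit into a Deny with feedback:
result, feedback = enforce("Permit", ["encrypt-at-rest"])
assert result == "Deny" and "encrypt-at-rest" in feedback
```

Interoperable obligation semantics would standardize what belongs in the supported set and how the feedback is conveyed back to the requester.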

PrimeLife inherited an obligation language from the PRIME project, developed it further, and presented it at the workshop. TAS3 also presented an original use of the element and argued for the possibility of using multiple obligation languages; a protocol for agreeing on which language to use was seen as the cornerstone of further interoperability. Others suggested the use of Semantic Web technologies and of the W3C Rule Interchange Format (RIF).

XACML 3.0 has several new features in this area. In addition to Obligations, which relate to access control enforcement, a new element called Advice can be used to associate any information with a decision. There is also work, currently in draft form, to define families of semantics for Obligations; the latest draft is the 28 December 2007 Working Draft of XACML v3.0 Obligation Families version 1.0 [XACML Obligation Families]. Too many questions remained open to reach a conclusion at the workshop, and several projects announced that they will coordinate their efforts to come up with a suggestion for TC XACML.

Credential-based Access Control

Credential-based Access Control would allow for a more privacy-friendly access control system that would also be more widely usable on the Web. The aim is to prove only the attributes needed for the task at hand. There is already a large body of literature on such capabilities, but XACML currently has no means to identify the type of credential used, nor to specify which credential is needed to gain access to a certain resource. This is more or less a special case of the attributes topic, with additional protocol issues. One way to convey the credential would be to use SAML, but SAML only allows XML Signature as a proof token.
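The core idea of proving only selected attributes can be sketched as follows. In a real system the selective disclosure would be cryptographic (e.g. anonymous credentials); here the "credential" is a plain dictionary purely for illustration.

```python
# Toy selective disclosure: the user reveals only the attributes the
# policy actually needs, instead of the whole credential.

credential = {"name": "Alice", "birth_year": 1980,
              "nationality": "CH", "over_18": True}

def disclose(cred, needed):
    # Reveal only the attributes named in the policy's requirements.
    return {k: cred[k] for k in needed if k in cred}

policy_needs = ["over_18"]
presentation = disclose(credential, policy_needs)

assert presentation == {"over_18": True}
assert "name" not in presentation     # identity stays hidden
```

What XACML lacks, and what the workshop discussed, is a standard way for the policy to express which credential (and which of its attributes) is required, so that such a minimal presentation can be requested in the first place.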

Pointing in a similar direction, but from a different angle, was the question of how to establish a user feedback channel in XACML, so that missing credentials and other information can be supplied by the user interactively. This would avoid the current "give-me-all-you-have" approach.

4.2 ISO TMB Privacy Steering Committee

In 2009, the ISO Technical Management Board (ISO TMB) decided to create a new Privacy Steering Committee (PSC 01). The aim of this PSC is to explore and advise the TMB on ISO technical standards that can support the implementation of public policy initiatives on privacy, with a specific focus on the protection of personally identifiable information (PII) and fair information handling. The first meeting of the PSC took place on 24 February 2010.

The ISO/TMB PSC 01 is currently drafting its terms of reference based on the recommendations of the TMB Task Force on privacy, with the following elements likely to be included:

• improve information sharing and coordination among committees engaged in privacy work, e.g. by holding a conference between all involved committees,
• develop a common terminology reference system, and
• implement a live public inventory of privacy initiatives.

An ISO International Privacy Conference is planned for 8-9 October 2010 (from 13:00 on the first day until 12:00 on the second) in Berlin, Germany. ISO and JTC 1 committees, data protection organisations and authorities, APEC, ISTPA, OECD, and CEN will be invited to submit position papers identifying their suggestions and proposals for contributing to the ISO standardization work. Based on the outcome of the conference, the PSC will prepare recommendations.

PrimeLife partners GUF, KU, and ULD participate in the PSC.

4.3 ISO/IEC JTC 1 "Information technology"

In the following, a summary of standardization activities under ISO/IEC JTC 1 is given.

4.3.1 ISO/IEC JTC 1/SC 27/WG 5 "Identity Management and Privacy Technologies"

Considering the promising new ways in which we use technology in our daily lives, and the important challenge of handling an individual's identity and personal information appropriately in the process, SC 27 established its new WG 5 on Identity Management and Privacy Technologies in May 2006. WG 5 is currently active in nine standardisation projects, accompanied by two Standing Documents, with more expected. The nine projects are as follows:

• Committee Draft 24745 Biometric Template Protection set out to describe security techniques for biometric template protection, focusing on privacy-enhancing techniques for biometric template generation. Its scope has since widened towards protecting biometric information beyond templates, so a name change to Biometric Information Protection was agreed in WG 5 and is currently under approval by JTC 1.
• A Framework for Identity Management (Committee Draft 24760) addresses the secure, reliable, and privacy-respecting management of identity information, considering that identity management is important for individuals as well as organizations, in any environment and regardless of the nature of the activities they are involved in.
• The project on Authentication Context of Biometrics has now produced International Standard 24761 "Information technology — Security techniques — Authentication context for biometrics" (first edition 2009-05-15), which defines the structure and the data elements of an authentication context for biometrics, by which service providers (verifiers) can judge whether a biometric verification result is acceptable.
• A Privacy Framework (Committee Draft 29100) provides a framework for defining privacy safeguarding requirements as they relate to personally identifiable information processed by any information and communication system in any jurisdiction.
• A Privacy Reference Architecture (Working Draft 29101) provides a reference architecture model that describes best practices for a consistent technical implementation of privacy requirements in information and communication systems.

• Entity Authentication Assurance (Working Draft 29115) aims at describing the guidelines and principles that must be considered in entity authentication assurance, and the rationale for why it is important to an authentication decision. A title change to "Entity authentication assurance framework" has been approved at the SC 27 level and is currently under approval by JTC 1.
• A Framework for Access Management (Working Draft 29146) aims to provide a framework for the definition of access management and for securely managing the process of accessing information. This framework is intended to be applicable to any kind of user, individuals as well as organizations of all types and sizes, and should be useful to organizations at any location and regardless of the nature of the activities they are involved in.
• A Privacy Capability Maturity Model (Working Draft 29190) describes a privacy capability maturity model providing guidance to organizations for assessing how mature their processes are for collecting, using, disclosing, retaining, and disposing of personal information. The document may also be used by third parties for the purpose of maturity assessment.
• Requirements on relative anonymity with identity escrow (Working Draft 29191) defines requirements on relative anonymity with identity escrow, based on a model of authentication and authorization using group signature techniques. It provides guidance on the use of group signatures for data minimization and user convenience, applicable in use cases where authentication or authorization is needed, and allows users to control their anonymity within a group of registered users by choosing designated escrow agents.

The two Standing Documents (SD) are:

• SD 1: A Roadmap for WG 5, giving an overview of existing projects, work items, and activities of WG 5, as well as possible fields of future work as discussed by WG 5.
• SD 2: Official Privacy Documents References List, providing introductory guidance on privacy-related references to assist individuals, organizations, enterprises, and regulatory authorities in
◦ identifying the adequate documentation and/or contacts for privacy issues, initiatives, and risks,
◦ improving the understanding of specifications and guidelines for developing privacy policies and practices, and
◦ understanding the implications of privacy-related laws and regulations.

Two Study Periods dealt with Privacy Capability Maturity Models and Access Control Mechanisms. The first led to the new project "Privacy Capability Maturity Model (Working Draft 29190)". The second triggered an SC 27-wide Study Period on Access Control, which is still ongoing, as access control in its entirety was considered an SC 27-wide topic.

It should also be mentioned that other Working Groups in SC 27 maintain a number of projects relevant to privacy, most notably ISO/IEC JTC 1/SC 27/WG 3 Security evaluation criteria, which is responsible for IS 15408 Evaluation Criteria for IT Security and its Privacy Class covering anonymity, pseudonymity, unlinkability, and unobservability.

With 57 member countries, ISO/IEC JTC 1/SC 27 has an immense global outreach. At the same time, WG 5 has a significant topical overlap with PrimeLife and combines openness to Privacy and Identity Management aspects with a solid foundation in IT Security. PrimeLife therefore established a liaison with ISO/IEC JTC 1/SC 27/WG 5 in 2008, which is working very successfully with substantial mutual benefit.

PrimeLife is active in this group through four partners (KAU, GUF, GD, ULD); one of the involved individuals is an Editor of 24760, another is the Convener of WG 5, and a third is the Acting Vice Convener of WG 5. Moreover, PrimeLife contributes through ad-hoc editing and liaison statements.

4.3.2 ISO/IEC JTC 1/SC 17/WG 4

Since its establishment in 1986, this group has been instrumental in specifying the standards for smart cards, in particular cards with contacts, and for smart card applications irrespective of the communication technology. Privacy aspects of card applications, in particular those related to authentication protocols, are occasionally discussed in this group.

Present activities involve the revision of ISO/IEC 7816-4 and the completion of the ISO/IEC 24727 series.

An expert from PrimeLife partner GD is attending this group.

4.3.3 ISO/IEC JTC 1/SC 17/WG 11

A biometric system always operates in two phases: biometric reference data is collected from the user group in a registration phase, and current biometric data is compared with this reference data in the verification phase. Usually, the reference data is stored in a database or on a portable data carrier such as a smart card. Match-on-card technology allows the comparison of the biometric data to be performed within the smart card, which means that the critical reference data never has to leave the smart card chip. This enhances the security and privacy of applications that rely on the comparison result established inside the card.
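The privacy property of match-on-card can be illustrated with a toy model: the reference template is sealed inside the card object and never exported, and the terminal only learns the yes/no outcome. The similarity measure below is invented for illustration and is not a real biometric algorithm.

```python
# Toy match-on-card model: the reference template stays private to
# the card; only the comparison result leaves it.

class SmartCard:
    def __init__(self, reference_template):
        self.__reference = reference_template   # never exported off-card

    def match(self, live_sample, threshold=0.9):
        # Comparison happens on-card; only a boolean result is returned.
        overlap = sum(a == b for a, b in zip(self.__reference, live_sample))
        return overlap / len(self.__reference) >= threshold

card = SmartCard([1, 0, 1, 1, 0, 1, 0, 1])

assert card.match([1, 0, 1, 1, 0, 1, 0, 1]) is True    # genuine user
assert card.match([0, 1, 0, 0, 1, 0, 1, 0]) is False   # impostor
```

The design point is the interface, not the algorithm: because only `match` is exposed, a compromised terminal can never read the reference data, which is exactly what ISO/IEC 24787 standardizes for real cards.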

SC 17/WG 11 was founded in 2005 to address applied biometrics on cards. Its main focus is the standardisation of on-card matching. The document ISO/IEC 24787 addresses the architectures, parameters, and usage of on-card matching. It supplements the existing standardisation landscape and gives application programmers guidance for understanding, implementing, and making use of match-on-card technology. The group deserves support because its documents help raise the level of privacy in daily life, particularly for digital identities used online.

PrimeLife partners decided to discontinue active participation in this group; the responsible partner therefore shifted its resources to other standardization topics more relevant to PrimeLife.

4.3.4 ISO/IEC JTC 1/SC 37

SC 37 was founded when the industry acknowledged that a significant level of standardisation is required to achieve acceptance of biometric technology in open mass markets and applications such as e-passports. The group addresses various aspects such as vocabulary, interfaces, data formats, testing, application profiles, and jurisdictional aspects of biometrics.

Raising the knowledge of biometrics amongst users and customers is one of the most important subjects for leading biometric vendors. Biometrics are not as secret as cryptographic keys. Biometric data, however, must be considered confidential personal information and has to be protected against misuse. From the privacy viewpoint, the standardisation activities should be observed and influenced where necessary to protect users' rights over their individual data.

A variety of standards addressing biometrics are available today. Most can be purchased through the national standardisation committees. The documents currently under revision can only be accessed by the ISO committee.

PrimeLife partners decided to discontinue active participation in this group; the responsible partner therefore shifted its resources to other standardization topics more relevant to PrimeLife.

4.4 OASIS

The Organisation for the Advancement of Structured Information Standards [OASIS] describes itself as "a not-for-profit consortium that drives the development, convergence and adoption of open standards for the global information society." The organisation was founded in 1993 and currently has "more than 5,000 participants representing over 600 organisations and individual members in 100 countries."

The OASIS specifications of most interest for PrimeLife are WS-Trust [WS-Trust 1.4], WS-Federation (see Chapter 9) (work in progress), and WS-SecurityPolicy [WS-SecurityPolicy 1.3]. They are publicly available and have broad industry support. The core specifications have been around since 2002, and base specifications such as XML Signature are robust, mature, and well adopted.

These specifications depend on open standards such as SOAP Version 1.2 [SOAP12], XML Signature [XML Sig], XML Encryption [XML Enc], etc. They are stable specifications with rich implementation support from multiple vendors. They are defined by XML schemas, and many of them are ratified by OASIS. Compatibility is likely given their widespread adoption.

The OASIS XACML Technical Committee [OASIS XACML] develops the XACML (see Chapter 5) policy language that is an important target for the PrimeLife project. Connections with this Technical Committee were established through the W3C Workshop on Access Control Application Scenarios (see Chapter 4), which was co-chaired by one of the chairs of the XACML TC.

The IPR regime for this set of specifications is primarily based on the OASIS IPR charter [OASIS IPR charter]. In general, the mentioned OASIS specifications are well founded and established both in industry and academia.

4.5 Kantara Initiative and Liberty Alliance

4.5.1 Kantara Initiative

Founded in April 2009, the Kantara Initiative aims to build an umbrella organization for the entire identity industry and to streamline different initiatives. It was formed by the DataPortability Project, the Concordia Project, Liberty Alliance, the Internet Society (ISOC), the Information Card Foundation (ICF), OpenLiberty.org, and XDI.org. Kantara aims to address the following industry issues: Interoperability and Compliance Testing; Identity Assurance; Policy and Legal Issues; Privacy; Ownership and Liability; UX and Usability; Cross-Community Coordination and Collaboration; Education and Outreach; Market Research; Use Cases and Requirements; Harmonization; and Tool Development.

Kantara is not a standards setting organization (SSO) for technical specifications. The output of Kantara Initiative is called a Recommendation. Any kind of work can be done in Kantara Initiative but if the work is a technical specification it must be submitted to an SSO upon completion.

The Kantara Initiative has subsumed the work previously done within Liberty Alliance.

4.5.2 Liberty Alliance

Liberty Alliance [Liberty Alliance] was founded in 2001 as a business alliance with the goal of defining and driving open technology standards, privacy guidelines, and business guidelines for federated identity management. Its 160 members include vendor companies, consumer companies, and government and education organisations. In addition, its Special Interest Groups are open communities, and OpenLiberty.org releases open source code for security and privacy.

The mission behind this alliance was to

• provide open standards and business guidelines for federated identity management spanning all network devices,
• provide open and secure standards for single sign-on with decentralised authentication and open authorisation, and
• allow users to maintain personal information more securely, and on their own terms.

The Liberty architecture is composed of three independent modules:

• Liberty Identity Federation Framework (ID-FF) (see Chapter 9) is the basis of the Liberty Single Sign-On and Federation framework.

It enables identity federation and management through features such as identity/account linkage, simplified sign-on, and simple session management. The architecture was originally based on SAML 1.1 and has since been integrated into SAML 2.0.

• Liberty Identity Web Services Framework (ID-WSF) [ID-WSF11] is Liberty's federation framework for Web services, allowing providers to share users' identities in a permission-based model. This framework offers features like permission-based attribute sharing, Identity Service Discovery (to discover identity and attribute providers), an Interaction Service (a mechanism to obtain permissions from a user), and the associated security profiles.
• Liberty Identity Services Interfaces Specifications (ID-SIS) [ID-SIS10] defines service interfaces for each identity-based Web service so that providers can exchange different parts of an identity in an interoperable way.

The ID-SIS is a set of specifications for interoperable services built on top of ID-WSF.

Liberty Alliance activities also included evaluation of the adoption of its specifications (see Case studies [Adoption]).

4.6 IETF

The Internet Engineering Task Force [IETF] is a community of international network experts. Participation is individual, but most participants are sponsored by organisations or companies: network operators, vendors, research centers, etc. There is no formal membership in the IETF: anyone can register for and attend IETF meetings, and anyone can subscribe to and participate in Working Group mailing lists.

IETF standards-track documents are published in the Request For Comments [RFC] series, along with experimental and informational documents, and the traditional April 1st humorous RFCs.

In RFC 3935, A Mission Statement for the IETF [RFC 3935], the organization's scope is summarized as follows: “protocols and practices for which secure and scalable implementations are expected to have wide deployment and interoperation on the Internet, or to form part of the infrastructure of the Internet.”

The IETF operates Working Groups in eight Areas, supervised by two entities: the Internet Engineering Steering Group [IESG] and the Internet Architecture Board [IAB]. These Areas are:

• Applications (APP), including the Web
• General (GEN)
• Internet (INT), the core technologies of the Internet
• Operations and Management (OPS)
• Real-time Applications and Infrastructure (RAI)
• Routing (RTG)
• Security (SEC)
• Transport (TSV)

The IETF work relevant to PrimeLife pertains to the Security [IETF SEC] and Applications [Application] Areas. This includes work on protocol-level security mechanisms, such as TLS, SASL, Kerberos, X.509, S/MIME, DKIM, and OAuth, and work on the HTTP and URI specifications.

4.7 TCG

The Trusted Computing Group [TCG] is an industry standardisation body that aims to develop and promote an open industry standard for trusted computing hardware and software building blocks to enable more secure data storage, online business practices, and online commerce transactions while protecting privacy and individual rights.

In D 3.4.1, the overall architecture and functionalities of Trusted Computing were covered. Although PrimeLife has concentrated more on the development of XACML policy and data handling, the TCG world has made significant progress in areas of interest:

To achieve an even wider distribution of Trusted Platform Modules, TPM Specification version 1.2 received ISO approval and is now ISO/IEC standard 11889.

But the most important development in the past year was the success of self-encrypting devices. Key management for these devices is made possible via the TCG model and its uninterrupted key chain from the hardware to the user; the TCG specifications [TCG Storage] provide the necessary interoperability. In the area of data protection, self-encrypting hard drives are now marketed as a way to achieve compliance with data protection rules [Data Protection]. The solution, however, is still based on a weak secret such as a password, and TCG does not address the data handling issues that would make the policing of data usage practical in highly sensitive areas like health care.

A further development is the use of Trusted Computing technology together with identity management systems like OpenID. OpenID by itself has few or no means to secure identity and the transmission of identity data, making identity theft rather easy. For the moment, the Trustbearer [Authentication] work serves to assure client identity toward the relying party, as this looks most promising economically. But the availability of a TCG PKI infrastructure has an impact on the usability of identity systems like OpenID in use cases that touch on sensitive data. Furthermore, another initiative [Trusted AC] tries to support access control with TCG technology. As it uses XACML as the basis for its access control scenarios, it is of high relevance to the PrimeLife project.

Chapter 5 Architectures and Frameworks

This section presents international standardization in the ISO/IEC context that is relevant to the activities of PrimeLife. The PrimeLife project plan anticipates involvement in:

1. Working Group 4 (WG 4) "Integrated circuit card with contacts" of ISO/IEC JTC 1/SC 17 "Cards and personal identification"
2. Working Group 11 (WG 11) "Application of biometrics to cards and personal identification" of ISO/IEC JTC 1/SC 17 "Cards and personal identification"
3. Subcommittee 37 "Biometrics" (ISO/IEC JTC 1/SC 37)
4. Working Group 5 (WG 5) "Identity Management and Privacy Technologies" of ISO/IEC JTC 1/SC 27 "IT Security techniques".

The past 18 months have not seen activities in SC 17/WG 4 within the goals of PrimeLife; partner participation (GD) continues, and any relevant issues will be raised within PrimeLife. There has been no partner participation in SC 17/WG 11 and SC 37 due to the lack of a qualified standardization expert and other priorities for possible expert resources. This section therefore focuses on SC 27/WG 5. PrimeLife has established a liaison with this working group and has participated in its meetings over the past 18 months. In that period PrimeLife has made significant contributions, which have also been recognized by several National Bodies.

The terminology in this section follows that of the respective standards.

5.1 Identity Management

5.1.1 Identity Management Framework (24760)

This standard specifies a framework for the secure, reliable, and private management of identity information. The framework is intended as a guideline for a wide range of activities involving identity information, from multinational corporations to individuals.

At its May 2009 meeting in Beijing, the WG appointed Eduard de Jong (GD) as co-editor of the standard. After the May meeting, the document was circulated as a first committee draft incorporating a completely revised set of definitions of terms, based on comments collected and submitted by the PrimeLife liaison officer (Hans Hedbom, KAU).

For the November 2009 meeting, PrimeLife contributions focused on consolidating the emerging consensus in the working group on the terminology definitions. In its liaison statement, PrimeLife proposed to split the document into multiple parts. The proposed split is intended to speed up publication of the parts that have reached consensus; early consensus seems possible on the terminology and basic concepts. This proposal was widely supported, but on procedural grounds a decision on it was postponed until the next meeting.

The meeting decided to mandate the editors with a complete redrafting of the clause that describes the key concepts. Eduard de Jong (GD) led this effort, assisted by experts from New Zealand, Australia, and the USA.

The document incorporating the redrafted clause was released as a second committee draft (CD2) in early January 2010.

PrimeLife will continue to influence the early adoption of this standard.

5.1.2 A Framework for Access Management (29146)

This standard provides a framework for access management and for the security aspects of the process of accessing information. It presents the life cycle of access and the security services associated with that access. Access is managed based on a policy.

Work on this standard is at an early stage (WD2), and substantial contributions will be needed to complete the draft. As this standard is based on policies, it may be relevant for PrimeLife to look into possible contributions with respect to policy languages.

5.1.3 Entity authentication assurance framework (29115)

This standard describes guidelines and principles for entity authentication. It provides a rationale for the importance of distinguishing discrete levels of assurance in an authentication decision. It also provides a framework for assessing "how close" an entity's identity is to the one that is asserted, and for maintaining this throughout an identity's life cycle. This standard is a joint project with ITU-T.

Based on a contribution from the US, the document has been changed considerably and has been released as a sixth Working Draft. The changes include a modification of the title ("framework" is added) and a change in scope to reflect the updated structure and content of the document.

5.2 Privacy

5.2.1 Privacy Reference Framework (29100)

This standard provides a framework for privacy when handling specific information in information and communication technology (ICT) systems, addressing issues from a high-level perspective. It is general in nature and places organizational, technical, and procedural aspects in an overall privacy framework, so it should help define privacy safeguarding requirements as they relate to PII (Personally Identifiable Information) processed by any information and communication system in any jurisdiction. The framework is to be applicable at an international level and addresses system-specific issues at a high level.

The purpose of this international standard is to provide guidance concerning information and communication system requirements for processing PII by setting a common privacy terminology, defining privacy principles for processing PII, categorising privacy features, and relating all described information privacy aspects to existing security guidelines. The framework can serve as a basis for desirable additional privacy standardisation initiatives, for example for a technical reference architecture, for the implementation and use of specific privacy technologies, for overall privacy management, for the assurance of privacy compliance for outsourced data processes, for privacy impact assessments, or for specific engineering specifications.

The Privacy Framework is being developed for those individuals with an interest in the standardisation of privacy safeguarding controls as they relate to PII processed by enterprise ICT systems. This may include individuals involved in specifying, procuring, architecting, designing, developing, testing, administering, and operating ICT systems. Recognising the growing need to incorporate privacy requirements and privacy safeguarding controls in system development life cycles or, more specifically, in security development life cycles, this International Standard addresses the target audience of ICT system developers in a separate section, providing a framework and guidelines for building privacy-enhancing functionality into systems during development.

This project is in the Committee Draft stage, with the 3rd CD having been issued for voting at the beginning of 2010.

5.2.2 Privacy Reference Architecture (29101)

This project aims at providing a privacy reference architecture model that describes best practices for a consistent, technical implementation of privacy safeguarding requirements as they relate to the processing of personally identifiable information in information and communication systems. It is to cover the various stages of data life cycle management and the privacy functionality required for PII in each stage, as well as positioning the roles and responsibilities of all involved parties. The privacy reference architecture aims at presenting a best-practice, privacy-enhanced architecture model and provides guidance for planning and building system architectures that facilitate the proper handling of PII across system platforms. It sets out the necessary prerequisites to allow the categorisation of data and control over specific sets of data within various data life cycles. It is the purpose of this project to provide guidance concerning a consistent and effective technical implementation of privacy safeguarding requirements within information and communication systems. Therefore, it establishes a privacy reference architecture that enables system architects to build the necessary privacy safeguarding measures into a system in a cohesive way across system platforms and to combine them with existing security measures, all to improve the proper handling of PII overall. Additionally, the privacy reference architecture provides best practices for advancing the use of privacy-enhancing technologies.

Interested parties that would benefit from using the concepts of the privacy reference architecture include representatives from organisations designing, developing, implementing, and operating information and communication systems. Most likely, these are representatives from IT departments such as development, support, and operations, or from business units such as quality assurance and data protection that have a specific interest in applying consistent architectural decisions to achieve compliance with specific privacy requirements, rules, and regulations.

This project is in the Working Draft stage, with WD4 having been issued for comments at the beginning of 2010.

Chapter 6 Policy and rule languages

6.1 eXtensible Access Control Markup Language (XACML) v3.0

XACML v3.0 [XACML v3.0] is an XML-based language for expressing and interchanging access control policies. The language offers the functionalities of most security policy languages and has standard extension points for defining new functions, data types, policy combination logic, and so on. In addition to the language, XACML defines both an architecture for the evaluation of policies and a communication protocol for message interchange. Some of the main functionalities offered by XACML can be summarised as follows.

• Policy combination. XACML provides a method for combining independently specified policies. Different entities can then define their policies on the same resource. When an access request on that resource is submitted, the system takes into consideration all the applicable policies.

• Combining algorithms. Since XACML supports the definition of positive and negative authorisations, there is the need for a method for reconciling independently specified policies when their evaluation is contradictory. XACML supports different combining algorithms, each representing a way of combining multiple decisions into a single decision.

• Attribute-based restrictions. XACML supports the definition of policies based on generic properties (attributes) associated with subjects (e.g., name, address, occupation) and resources (e.g., creation date, type). XACML includes some built-in operators for comparing attribute values and provides a method for adding further functions.

• Policy distribution. Policies can be defined by different parties and enforced at different enforcement points. Also, XACML allows one policy to contain, or refer to, another.

• Implementation independence. XACML provides an abstraction layer that isolates the policy-writer from the implementation details. This layer guarantees that different implementations operate in a consistent way, regardless of the specific implementation.

• Obligations. XACML provides a method for specifying actions, called obligations, which must be fulfilled in conjunction with the policy enforcement, after the access decision has been taken.

XACML also supports the specification of multiple subjects in a single policy, multi-valued attributes, conditions on metadata of the resources, and policy indexing.
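The combining algorithms mentioned above can be made concrete with a short sketch. The following Python fragment is a simplified illustration, not the normative XACML definitions (which distinguish several Indeterminate sub-cases); the function names and decision strings are our own.

```python
def deny_overrides(decisions):
    """Simplified deny-overrides: any Deny wins over everything else."""
    if "Deny" in decisions:
        return "Deny"
    if "Indeterminate" in decisions:
        return "Indeterminate"
    if "Permit" in decisions:
        return "Permit"
    return "NotApplicable"

def permit_overrides(decisions):
    """Simplified permit-overrides: any Permit wins over everything else."""
    if "Permit" in decisions:
        return "Permit"
    if "Indeterminate" in decisions:
        return "Indeterminate"
    if "Deny" in decisions:
        return "Deny"
    return "NotApplicable"

def first_applicable(decisions):
    """The first applicable (Permit or Deny) rule decision wins."""
    for decision in decisions:
        if decision in ("Permit", "Deny"):
            return decision
    return "NotApplicable"
```

For the same list of rule decisions, the chosen algorithm changes the outcome: deny_overrides(["Permit", "Deny"]) yields "Deny", while first_applicable(["Permit", "Deny"]) yields "Permit".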

Figure 2: XACML architecture

The figure above illustrates how XACML works and the data flow in access control evaluation. Access control works as follows. The requester sends an access request to the Policy Enforcement Point (PEP) module (step 2), which in turn sends it to the Context Handler (step 3). The Context Handler translates the original request into a canonical format, called the XACML request context, and sends it to the Policy Decision Point (PDP) (step 4). The PDP identifies the applicable policies among the ones stored at the Policy Administration Point (PAP) and retrieves the attributes required for the evaluation through the Context Handler (steps 5-10). If some attributes are missing, the Context Handler queries the Policy Information Point (PIP) module to collect them. The PIP provides attribute values about the subject, resource, and environment. To this purpose, the PIP interacts with the subject, resource, and environment modules. The environment module provides a set of attributes that are relevant to taking an authorisation decision and are independent of a particular subject, resource, and action. The PDP evaluates the policies against the retrieved attributes and returns the XACML response context to the Context Handler (step 11). The Context Handler translates the XACML response context into the native format of the PEP and returns it to the PEP together with an optional set of obligations (step 12). The PEP fulfils the obligations (step 13), and grants or denies the request according to the decision in the response context.
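The data flow above can be sketched in a few lines of Python. This is a hypothetical, heavily simplified model: the context handler is reduced to a dictionary copy, obligations are omitted, and none of the class or method names come from the standard.

```python
class PIP:
    """Policy Information Point: supplies attribute values that are
    missing from the request context (steps 5-10)."""
    def __init__(self, attributes):
        self.attributes = attributes
    def fetch(self, attribute_id):
        return self.attributes.get(attribute_id)

class PDP:
    """Policy Decision Point: evaluates policies against attributes.
    Each policy is a pair (required attribute ids, decision function)."""
    def __init__(self, policies):
        self.policies = policies
    def evaluate(self, context, fetch_missing):
        for required, decide in self.policies:
            for attribute_id in required:
                if attribute_id not in context:   # missing attribute: ask the PIP
                    context[attribute_id] = fetch_missing(attribute_id)
            decision = decide(context)
            if decision in ("Permit", "Deny"):
                return decision
        return "NotApplicable"

class PEP:
    """Policy Enforcement Point: receives the native request, has it
    evaluated, and enforces the decision."""
    def __init__(self, pdp, pip):
        self.pdp, self.pip = pdp, pip
    def handle(self, native_request):
        context = dict(native_request)            # "context handler" in one line
        return self.pdp.evaluate(context, self.pip.fetch) == "Permit"

# A toy policy permitting access for head physicians only:
policy = (["role"],
          lambda ctx: "Permit" if ctx.get("role") == "head physician" else "Deny")
pep = PEP(PDP([policy]), PIP({"role": "head physician"}))
```

Here pep.handle({"action": "read"}) returns True, because the PIP supplies the missing role attribute before the PDP evaluates the policy.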

6.1.1 Basic XACML concepts

XACML relies on a model that provides a formal representation of access control policies and on mechanisms for their evaluation. An XACML policy contains one Policy or PolicySet root element, which is a container for other Policy or PolicySet elements. Element Policy consists of a Target, a set of Rule elements, an optional set of Obligation elements, an optional set of Advice elements, and a rule combining algorithm. A Target element describes a set of requests in the form of a logical expression on subjects, resources, and actions. If a request satisfies the requirements specified in the Target, the corresponding policy applies to the request.

A Rule corresponds to a positive (permit) or negative (deny) authorisation, depending on its effect, and may additionally include an element Condition specifying further restrictions on subjects, resources, and actions. As for element Policy, element Rule may contain a Target, Obligation, and Advice. Each condition can be defined through element Apply, with attribute FunctionId denoting the XACML predicate (e.g., string-equal, integer-less-than) and with appropriate sub-elements denoting both the attribute against which the condition is evaluated and the comparison value. The rule's effect is returned whenever the rule evaluates to true. The Obligation element specifies an action that has to be performed in conjunction with the enforcement of an authorisation decision. The Advice element specifies supplemental information about a decision.

Each element Policy has attribute RuleCombiningAlgId specifying how to combine the decisions of different rules into a final decision of the policy evaluation (e.g., deny overrides, permit overrides, first applicable, only one applicable). According to the selected combining algorithm, the authorisation decision can be permit, deny, not applicable (i.e., no applicable policies or rules can be found), or indeterminate (i.e., some information is missing for the completion of the evaluation process).
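The Rule semantics just described — a target, an optional condition, and an effect that is returned only when both hold — can be mirrored in a small sketch. The following Python model is purely illustrative; attribute names such as record-head-physician are hypothetical.

```python
class Rule:
    """Toy model of an XACML Rule: an effect guarded by a target and an
    optional condition, both modelled as predicates over the request."""
    def __init__(self, effect, target, condition=None):
        self.effect = effect          # "Permit" or "Deny"
        self.target = target
        self.condition = condition
    def evaluate(self, request):
        if not self.target(request):
            return "NotApplicable"    # rule does not apply to this request
        if self.condition is not None and not self.condition(request):
            return "NotApplicable"    # condition evaluates to false
        return self.effect            # the rule's effect is returned

# "A head physician may read the record she is designated for":
rule = Rule(
    effect="Permit",
    target=lambda r: r["action"] == "read" and r["role"] == "head physician",
    condition=lambda r: r["subject-id"] == r["record-head-physician"],
)
```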

As an example, suppose that a hospital defines a high-level policy stating that "any user with role head physician can read the patient record for which she is designated as head physician". The XACML policy below corresponds to this high-level policy. The policy applies to requests on the http://www.example.com/hospital/patient.xsd resource. It has one rule with a target that requires a read action and a subject with role head physician, and a condition that applies only if the subject is the head physician of the requested patient.

[XACML policy listing garbled in this copy; surviving fragments: the schema identifier urn:example:med:schemas:record, the role value "head physician", and the action "read".]

6.1.2 XACML 3.0: Privacy profile

The XACML v3.0 Privacy Policy Profile Version 1.0 is a standard issued by OASIS describing "a profile of XACML for expressing privacy policies" [XACMLPriv v3.0]. This profile uses the following two attributes:

• urn:oasis:names:tc:xacml:2.0:resource:purpose, which indicates the purpose for which a data resource was collected, and

• urn:oasis:names:tc:xacml:2.0:action:purpose, which corresponds to the purpose for which access to a data resource was requested.

A standard rule is defined, according to which access to the requested resource is to be denied unless the two above-mentioned purposes match by regular-expression match, as shown below.

[Rule listing garbled in this copy; it compares the two string-typed purpose attributes (DataType http://www.w3.org/2001/XMLSchema#string) by regular-expression match and denies access on a mismatch.]

Such a rule must be used in the scope of the rule-combining algorithm urn:oasis:names:tc:xacml:2.0:rule-combining-algorithm:deny-overrides. To conform to this specification, an implementation should, as an XACML request producer, make use of the attributes above and, as an XACML policy processor, enforce the proposed rule. The profile deals with an important aspect that is also found in the research context of the PrimeLife project, namely, purposes of data processing. Nevertheless, purposes are a very specific facet of data handling, and we consider the scope of this standard proposal too narrow compared to the much more general concepts its name, Privacy Policy Profile, refers to.
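The matching idea behind the profile can be sketched as follows. In this hypothetical Python fragment the collected-for purpose is treated as the regular expression, and Python's re module stands in for the XACML regexp-match function — an approximation of, not a substitute for, the normative semantics.

```python
import re

RESOURCE_PURPOSE = "urn:oasis:names:tc:xacml:2.0:resource:purpose"
ACTION_PURPOSE = "urn:oasis:names:tc:xacml:2.0:action:purpose"

def purpose_rule(request):
    """Deny access unless the purpose of the access request matches the
    purpose the resource was collected for (regular-expression match)."""
    collected_for = request[RESOURCE_PURPOSE]   # e.g. "research.*"
    requested_for = request[ACTION_PURPOSE]     # e.g. "research-statistics"
    if re.fullmatch(collected_for, requested_for):
        return "NotApplicable"                  # the Deny rule does not fire
    return "Deny"
```

Under the deny-overrides combining algorithm, the "Deny" produced on a mismatch overrides any Permit produced elsewhere in the policy.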

6.1.3 Current status of the XACML proposal

The latest ratified version, XACML 2.0, was approved by the OASIS standards organisation on 1 February 2005. Version 3.0 is currently in preparation.

There are a number of open source implementations of the XACML standard:

• Sun implementation of XACML, version 2.0 [Sun XACML v2.0]
  ◦ Full support of XACML 2.0, no support for SAML
  ◦ Requires Java 2 Platform, Standard Edition, version 1.4.0
  ◦ License: BSD License

• Sun implementation of XACML, version 1.2 [Sun XACML v1.2]
  ◦ No support for SAML
  ◦ Requires Java 2 Platform, Standard Edition, version 1.4.0
  ◦ License: BSD License

• Enterprise-Java-XACML from Google Code, (beta) version 0.0.14 (08/02/2008) [Enterprise-Java-XACML]
  ◦ Not based on Sun Microsystems' implementation
  ◦ Full support of XACML 2.0, intended support of XACML 3.0
  ◦ License: Apache License 2.0

• SICSACML: XACML 3.0 [SICSACML]
  ◦ Based on Sun Microsystems' implementation
  ◦ Java implementation of the XACML 3.0 draft, released as a patch for the Sun implementation; it implements a PDP for XACML 3.0
  ◦ License: BSD License

Further information about the state of XACML is available from the OASIS XACML Technical Committee's home page [OASIS XACML].

6.1.4 Relations to other proposals and to the PrimeLife project

Detailed comparisons in the literature [TREPALXACML] show that XACML provides a more comprehensive access control policy language than its most significant competitors (e.g., EPAL, see Section 6.5). However, XACML has drawn some criticism regarding its low performance, because of the overhead (processing time and memory) of the XML format, and the lack of easy integration into existing entitlement engines.

The strongest advantage of XACML is its widespread adoption in industry, making it the de facto standard among access control languages. It does, however, have a number of drawbacks with respect to several important use cases in the PrimeLife context. First, it is built on the assumption that the user reveals all of her attributes to the server, so that the server can evaluate its policy and decide to grant or deny access; not a very privacy-friendly assumption indeed. Second, XACML can only work with fully disclosed attribute values. This makes it unsuitable for use with anonymous credentials, where mere predicates over attributes are certified. Third, there is hardly any support for data handling policies, especially for the attributes disclosed by the user. The Obligations element allows a policy author to specify obligations that the local PEP has to adhere to when a rule is successfully evaluated, e.g., to log all accesses to a protected resource. However, these are obligations related to the resource hosted by the server, not to attributes of the user. Moreover, no format has been standardised for the content of obligations, so it is not clear what can be expressed in them.

Still, because of its widespread adoption in the real world and its rich syntax for expressing functions over attributes, a large part of the policy activity in PrimeLife is centered on addressing the privacy issues in XACML. The first PrimeLife policy engine (deliverable D5.3.1) hooked an XACML engine into the existing PRIME engine, so that XACML-protected resources could profit from the privacy features of the PRIME engine, such as support for anonymous credentials, data handling policies, and policy dialog (meaning that first the policy is communicated to the user, and only then does she reveal her attributes). The second and third PrimeLife policy engines (deliverables D5.3.2 and D5.3.3) will be purely XACML-based engines, where the XACML language itself is extended with provisions for credential-based access control, policy dialog, and two-sided data handling policies/preferences with automated matching. This modified XACML engine will be deployed on both the server's and the user's side, in the former case to protect the server's offered resources, in the latter case to protect the user's private data.

A more detailed discussion of the interaction between XACML and questions addressed within the project is separately included in the report of the joint W3C and PrimeLife Workshop on Access Control Application Scenarios (see Chapter 4).

6.2 The Rule Interchange Format (RIF)

The Rule Interchange Format is a family of XML languages being developed by the Rule Interchange Format (RIF) Working Group [RIF] at W3C to allow computer-processable rules to be transferred between rule systems. At the time of this writing (January 2010), the specifications are stable. The Working Group has issued a Call for Implementations and hopes to advance its deliverables to W3C Recommendation status within a few months.

6.2.1 RIF Dialects

The RIF family of languages (each called a "dialect") is organised to match the different kinds of technologies used to work with rules. It starts with a common language, RIF Core, which corresponds to the shared subset (the intersection) of major rule languages. Rules written in this XML language can be translated with relative ease to nearly all other major rule languages. Simple rules can be written in RIF Core, or translated to RIF Core, but many practical rule bases will only be transferable using some other dialect. Expressively, RIF Core is datalog (the language of positive Horn clauses without function terms), along with some primitive datatypes (such as integers and strings) and the common operations on those datatypes (such as addition and string concatenation). The syntax of RIF is designed to be quite general, and to be straightforward to parse and generate, so it is naturally verbose. A part of an example rule about the perishability of items, from one of the RIF working drafts, is given here:

[RIF XML listing garbled in this copy; surviving fragments: the variables item, deliverydate, scheduledate, diffduration, and diffdays, and the conclusion predicate cpt:perishable.]

Above RIF Core, RIF splits into two main branches, "production rules" and "logic rules". Production rules take the form "if (some condition) becomes true then do (something)", or if-condition-then-action. This is the style of rule handled by the major business rules products. Logic rules take the form "if (some condition is true) then (some other condition must be true)", or if-condition-then-condition. This is the style of rule handled by pure Prolog, by first-order logic theorem provers, and by many systems with varying degrees of acceptance over the past 40+ years. Along the logic branch, the RIF Basic Logic Dialect, or RIF BLD, provides a common subset language. It extends RIF Core by adding function terms and equality, along with argument naming and membership/subclass structures.

It does not, however, include any form of negation, since logic languages diverge sharply around the types of negation they implement, primarily (monotonic) classical negation as opposed to some form of (non-monotonic) negation-as-failure. However, the language is extensible to allow creation of such dialects.
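To make the expressivity of RIF Core concrete: datalog rules of the kind it covers can be evaluated by naive forward chaining. The following Python toy evaluator is not a RIF implementation; it merely illustrates positive Horn rules without function terms, and the predicates in the example are invented.

```python
def unify(atom, fact, binding):
    """Match one atom (variables start with '?') against a ground fact."""
    if len(atom) != len(fact):
        return None
    binding = dict(binding)
    for a, f in zip(atom, fact):
        if a.startswith("?"):
            if binding.setdefault(a, f) != f:
                return None               # variable already bound differently
        elif a != f:
            return None                   # constant mismatch
    return binding

def matches(body, facts, binding):
    """Yield every variable binding under which all body atoms hold."""
    if not body:
        yield binding
        return
    for fact in facts:
        b = unify(body[0], fact, binding)
        if b is not None:
            yield from matches(body[1:], facts, b)

def derive(facts, rules):
    """Naive forward chaining: apply rules until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            new = {tuple(b.get(t, t) for t in head)
                   for b in matches(body, facts, {})}
            if not new <= facts:
                facts |= new
                changed = True
    return facts

# One rule: customers of a trusted supplier get a discount.
rules = [([("buys", "?x", "?y"), ("trusted", "?y")], ("discount", "?x"))]
facts = {("buys", "alice", "acme"), ("trusted", "acme")}
```

Running derive(facts, rules) adds the derived fact that alice gets a discount; the perishability rule alluded to above would additionally need the arithmetic built-ins (date difference, comparison) that RIF provides.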

6.2.2 Use Cases

The use cases [RIF Use Cases] and applications for RIF cover much of the space of distributed information systems. Wherever there is information processing, the option of using a rule system (instead of imperative programs) arises, and in some application areas rule systems have become widely adopted. High-profile areas include credit scoring (Fair Isaac, the company behind FICO credit scores, is a major rule system vendor and a founding participant in the RIF Working Group), regulatory compliance in banking, and health care delivery. To date, most major rule systems have been closed-architecture, single-provider systems. With RIF, we begin to see the possibility of rules being developed on one system and then easily moved to another, allowing customers to avoid vendor lock-in, developing a stronger market, and encouraging more investment in long-term rule bases. Rule interoperability enables many more applications, though, when distributed systems can exchange rules. For example, vendors can publish complex, dynamic pricing structures (as rules), and customers can then (computationally) determine the most efficient purchases to initiate. Complex negotiations, with ensuing efficiencies, become possible all along the supply chain, as trading partners are able to selectively expose their business logic (their rule bases) and search for synergies. In the life sciences, rule interchange can be pivotal in both research and health care delivery. In research, data integration is an enormous problem because of the vast variety of medical research; effectively mining that data, to find the bits relevant to a particular task, is essential. Rule systems (and related Semantic Web technologies, like OWL) can greatly ease the integration effort. On the clinical side, rule systems can help physicians make diagnoses and orchestrate treatments (including detecting likely errors). Since many of these benefits increase with the scale of the market for rules (more users of rule bases, more providers of rule bases), a common interchange format should significantly improve the end benefits for research and for patients.

6.2.3 Relations to PrimeLife

RIF has regularly come up in PrimeLife discussions as an attractive candidate for expressing logic-based policies. While logic-based languages are a very promising approach towards automated checking of compliance with formulated policies, writing policies in these languages is considered a daunting task by many. Within the project, experiments with logic-based approaches are focused on the SecPal policy language.

6.3 P3P

The Platform for Privacy Preferences Project [P3P] enables Web sites to express their privacy practices in a standard format that can be retrieved automatically and interpreted easily by user agents. P3P user agents will allow users to be informed of site practices (in both machine- and human-readable formats) and to automate decision-making based on these practices when appropriate. Thus users need not read the privacy policies at every site they visit.

Although P3P provides a technical mechanism for ensuring that users can be informed about privacy policies before they release personal information, it does not provide a technical mechanism for making sure sites act according to their policies. Products implementing P3P may provide some assistance in that regard, but that was left to specific implementations. However, P3P provides the basis for tools capable of alerting and advising the user. Wherever notices are required by laws or self-regulatory programmes, P3P can provide a very user-friendly way to deliver them. In addition, P3P does not include mechanisms for transferring data or for securing personal data in transit or storage, but it can be used by tools that decide on data flow.

P3P has three components:

• A protocol: The P3P protocol allows user agents to discover privacy metadata about a given Web site. This is done either via a well-known location, an HTTP header, or a link tag inside the head element of an HTML page.

• A vocabulary: The P3P vocabulary allows expressing data handling practices such as the identity of the party making the statement, the retention period, secondary uses, disclosure to third parties, dispute resolution mechanisms, and more.

• A data schema language: The base data schema is an internationalised schema for expressing personal data. It is used to express the object of the statement element. It allows a level of detail that can go down to one policy per data item, e.g., to treat the given name differently from the family name.
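As an illustration of the vocabulary, the following Python sketch parses a minimal policy fragment and extracts the practices it states. The element names (POLICY, STATEMENT, PURPOSE, RETENTION, DATA) and the namespace come from the P3P 1.0 vocabulary, but the snippet itself is invented for illustration and is not a complete, valid policy.

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal P3P policy fragment.
POLICY = """
<POLICY xmlns="http://www.w3.org/2002/01/P3Pv1" name="sample">
  <STATEMENT>
    <PURPOSE><current/><individual-analysis/></PURPOSE>
    <RETENTION><stated-purpose/></RETENTION>
    <DATA-GROUP>
      <DATA ref="#user.name.given"/>
    </DATA-GROUP>
  </STATEMENT>
</POLICY>
"""

NS = "{http://www.w3.org/2002/01/P3Pv1}"

def practices(policy_xml):
    """Collect purposes, retention and data references per STATEMENT."""
    root = ET.fromstring(policy_xml)
    out = []
    for stmt in root.iter(NS + "STATEMENT"):
        purposes = [e.tag[len(NS):] for e in stmt.find(NS + "PURPOSE")]
        retention = [e.tag[len(NS):] for e in stmt.find(NS + "RETENTION")]
        data = [d.get("ref") for d in stmt.iter(NS + "DATA")]
        out.append({"purposes": purposes, "retention": retention, "data": data})
    return out
```

A user agent would match the extracted practices against the user's preferences, e.g. warning when a purpose beyond "current" applies to a data item the user considers sensitive.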

6.3.1 Status

The P3P 1.0 Specification [P3P 1.0 Spec] became a W3C Recommendation on 16 April 2002, after 5 years of intense development with numerous obstacles. The initial idea of a wallet service and a negotiation protocol [Removing Data Transfer from P3P] proved to be too ambitious, and P3P was toned down to a pure policy language able to express data collection and usage practices in a machine-readable syntax.

The first, rather rudimentary, implementation was Microsoft's Internet Explorer 6.0. It only analysed the HTTP headers containing the tokens called a Compact Policy [Compact Policy] in the P3P Specification. Based on the privacy practices expressed in the tokens, cookies were blocked or allowed. Many Web sites reacted quickly and installed P3P policies. Privacy Bird [Privacy Bird] is a plug-in that matches P3P policies against preferences and has a good reporting tool. Whenever there is a mismatch, the bird turns an angry red, and the policy report shows exactly where the mismatch between the user's preferences and the site's policy lies. The most complete tool implemented was the P3P implementation of the JRC [JRC], a Java-based proxy implementation, but its development has stopped.

Experience with P3P triggered two further workshops: one on incremental improvements [Future of P3P Workshop 2002] and another on the long-term vision [Future of P3P Workshop 2003]. The first workshop triggered further work on P3P 1.1 [P3P 1.1 Spec], which added user agent guidelines and improved the protocol by allowing sites to identify related sites. P3P 1.0 did not say anything about the user agent. After testing, it appeared that people wanted a standard answer to a standard situation; the user agent guidelines reflect this. Even today, the challenge of an efficient privacy dashboard remains open, let alone its integration into Web browsers.

But in the aftermath of 9/11, governments started to massively increase data collection. The privacy debate shifted back to where it came from, namely privacy as a right against the government. This debate took nearly all interest away from privacy in private services, and development slowed down. P3P 1.1 [P3P 1.1 Spec] was published as a Working Group Note.

6.3.2 Conclusion

P3P work remains very important for PrimeLife, as it was the first attempt to express data handling in a structured, machine-readable way. It showed the way forward and resulted not only in many Web site implementations, but also in a flood of research publications that tried to further the approach taken by P3P. Although it was not addressed in the P3P Specification, the use of policies for data governance was always one aspect of the work, and IBM realised this approach with the Tivoli Privacy Manager, which handles data in the backend using a P3P engine. Today, the level of P3P implementation on Web servers remains high.

PrimeLife can take advantage of the existing large scale implementation of P3P on Web sites to acquire metadata and draw conclusions. Many of the P3P challenges are still unresolved and can be further advanced by the research done in PrimeLife.

The P3P concept of a demilitarized zone to bootstrap privacy policy negotiations has been adopted in the second PrimeLife policy language (described in deliverable H5.3.2). Also, the P3P vocabulary of usage purposes was taken up as the basic purpose ontology in the same language.

6.4 APPEL

Note: This chapter documents a historical attempt at developing a policy language. It is unchanged from the first report on standardization.

APPEL [APPEL] specifies a language for describing collections of preferences regarding P3P policies between P3P agents. Using this language, a user can express her preferences in a set of preference rules (called a ruleset), which can then be used by the user agent to make automated or semi-automated decisions regarding the acceptability of machine-readable privacy policies from P3P-enabled Web sites.

At some point in time, the P3P Specification Working Group thought that a common interchange language for specifying user preferences, understood by all P3P implementations, was a precondition for user acceptance and adoption of this technology. Several other efforts have addressed a similar problem for other communities, for example PICSRules and Profiles 0.94. An interchangeable format for preference rules would allow data protection professionals to disseminate minimum guidelines and default privacy protection levels to users who have neither the time nor the knowledge to create them themselves.

6.4.1 Shortcomings

In matching P3P policies, there is a huge range of options that an agent can look for. APPEL gives a suggested specification for a matching algorithm and an interchangeable XML rule format, which is in fact the only existing interoperable format for preference files. However, implementing a user interface for the full range of possibilities within APPEL results in an extremely complex interface. Only one such utility was ever built, as part of the JRC P3P project, and its interface proved too complex even for experts.

Additionally, APPEL showed some unpredictable behaviour: a P3P policy could be written in two different ways with the same meaning, yet an APPEL rule would evaluate the two differently, leading to unexpected results. This was due to the fact that APPEL was the first attempt to construct a rule language based on RDF technologies. There were suggestions to improve the user interface by moving P3P to a formalised ontology, but this was not taken up by the P3P Specification Working Group due to a lack of support from the community and the implementers. In the end, APPEL was published as a Working Group Note on 15 April 2002. It is used in most P3P implementations (e.g., [Privacy Bird]) as an import format and subsequently translated into the internal format of the application.

The world moved on. There was a lot of interest in APPEL and in rules in general as part of the Semantic Web [Semantic Web] technology. The engineers realised that they needed a new, clean approach that is not tied to privacy only. After long scientific discussions, W3C chartered the Rule Interchange Format (RIF) Working Group [RIF], which is still running. A privacy ontology was developed by the PRIME project [PRIME]. RIF is only a framework for rulesets. There is the option for PrimeLife to combine the privacy ontology developed by PRIME with the framework set by RIF in order to develop a new RIF-compliant rule language. The PRIME obligation language is an obvious candidate from which to learn and to refine the approach to rules.

6.4.2 Conclusion

APPEL is a very important historical step towards rule languages, but it was slightly premature. APPEL helps in understanding the fundamental issues concerning semantics and syntax raised by a privacy preference language. But there is no point in taking up APPEL as is, as there are newer languages and technologies available that allow a much cleaner approach to privacy rules.

Still, the PrimeLife project did pick up on the basic concept of having privacy preferences, i.e., a machine-interpretable language in which users can express how they expect their information to be treated. The second PrimeLife policy engine (deliverable D5.3.2) features an expressive preference language that can be automatically matched against a server's proposed policy, thereby assisting users in the evaluation of privacy policies.

6.5 Enterprise Privacy Authorisation Language (EPAL)

Enterprise Privacy Authorisation Language [EPAL] is a framework for managing collected personal data that aims at enabling enterprises to formalise and enforce privacy practices. Privacy practice prescriptions are embodied in the form of a policy.

6.5.1 Structure of an EPAL policy

An EPAL policy is given in the form of an XML data structure and includes three main sections.

• The Policy Information identifies the policy, providing information about the Issuer, the Version Number, the Start Date, the End Date, the Replacement Policy Name, and the Replacement Policy Version.

• A Vocabulary Reference provides a pointer, in the form of a URI, to an EPAL vocabulary that describes all the components that can be used in the rules of the following section. These components deal with the entities typically involved in a transaction: Data Users, Data Categories, Actions, Purposes, Conditions, Obligations, and Context Models.

• The Ruling Set includes the rules that define whether a Data User is allowed or denied to perform an Action on a Data Category for a certain Purpose under specific Conditions. The rules are ordered with descending precedence; that is, if a rule applies (i.e., allows or denies a request), subsequent rules are ignored. To be applicable to a request, a rule must include the same elements (such as Data Users, Actions, ...) as the request, and all of its Conditions must be met.

Every EPAL policy is characterised by an optional Global Condition and by a mandatory Default Ruling. The Default Ruling ("allow", "deny", or "not-applicable") is returned as the result of the policy evaluation if the Global Condition is false or if no rule within the policy is applicable.
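The evaluation order just described — global condition first, then the rules in descending precedence, falling back to the default ruling — can be sketched as follows. This Python fragment is illustrative only; the field names and the example vocabulary are invented and do not reproduce the normative EPAL semantics.

```python
def evaluate(policy, request):
    """First-applicable evaluation with global condition and default ruling."""
    if not policy.get("global_condition", lambda r: True)(request):
        return policy["default_ruling"]
    for rule in policy["rules"]:                      # descending precedence
        applies = all(rule[key] == request[key]
                      for key in ("user", "action", "category", "purpose"))
        if applies and rule["condition"](request):
            return rule["ruling"]                     # first applicable rule wins
    return policy["default_ruling"]

# Example mirroring the kind of policy discussed below: storing children's
# data is allowed only if parental consent has been given.
policy = {
    "default_ruling": "deny",
    "rules": [{
        "user": "service", "action": "store",
        "category": "child-data", "purpose": "service-provision",
        "condition": lambda r: r.get("parental_consent", False),
        "ruling": "allow",
    }],
}
```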

6.5.2 An EPAL policy example

A simplified example of an EPAL policy follows:

(Only fragments of the original XML listing survive: issuer Guenter, IBM Research; a condition that is true if the data subject is a child according to COPPA, i.e., age <= 13; and an obligation of parental consent for collection.)
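From the fragments that survive above, a hedged reconstruction of the policy might look as follows; element names approximate the EPAL 1.2 W3C submission, and all identifiers and the vocabulary location are illustrative rather than authoritative:

```xml
<epal-policy id="coppa-example" default-ruling="deny">
  <policy-information>
    <issuer name="Guenter" organization="IBM Research"/>
  </policy-information>
  <!-- illustrative vocabulary reference -->
  <vocabulary-ref id="v1" location="http://example.com/coppa-vocabulary.xml"/>
  <rule id="r1" ruling="allow">
    <data-user refid="marketing-department"/>
    <action refid="collect-and-store"/>
    <data-category refid="contact-information"/>
    <purpose refid="service-provision"/>
    <!-- true if the data subject is a child according to COPPA (age <= 13) -->
    <condition refid="is-coppa-child"/>
    <!-- parental consent for collection must have been obtained -->
    <obligation refid="parental-consent-for-collection"/>
  </rule>
</epal-policy>
```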

The policy above allows the collection and storage of children's data provided that parental consent has been given. All elements whose definition is not provided in this epal-policy element are defined in vocabularies referenced in the policy itself.

6.5.3 An EPAL query example

A typical EPAL authorisation query, to which the policy above applies, looks as follows.

(Only the attribute value 0123456789 survives from the original listing.)
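A hedged reconstruction built around that surviving value, with element and identifier names chosen for illustration only:

```xml
<epal-query>
  <data-user refid="marketing-department"/>
  <action refid="collect-and-store"/>
  <data-category refid="contact-information"/>
  <purpose refid="service-provision"/>
  <!-- the container instantiated for this evaluation, carrying the actual
       attribute values needed to evaluate the policy's conditions -->
  <container refid="customer-record">
    <attribute refid="phone-number">0123456789</attribute>
  </container>
</epal-query>
```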

The refid attribute of the Container element refers to the container of the policy that is instantiated for the authorisation evaluation. The container holds one or more attribute elements with the actual attribute values to be used to evaluate the relevant conditions. Policies with an Obligation element may also state that, when a certain access is allowed, specific additional steps must be taken by the requestor.

6.5.4 A typical EPAL scenario

In a typical EPAL scenario, a customer of the enterprise views the privacy policy statement (specified, e.g., with P3P; see Chapter 6), accepts it, and submits personally identifiable information. The consent and the relevant policy are logged, and the privacy management enforcement monitors ensure that the enterprise's employees can only access the data in ways that conform to the privacy policy.

6.5.5 Status of the EPAL proposal

EPAL was submitted to W3C in 2003 for consideration as a privacy policy language standard. Due to a number of legal issues, development has since come to a halt.

6.5.6 Relations to other standards and to the PrimeLife project

As illustrated above, EPAL can be integrated with the P3P standard, which can be used to receive privacy policy preferences from end users that are then stored and enforced in an enterprise's IT system relying on EPAL. However, this approach should not be taken as the sole possible deployment mode for EPAL: the policy language has greater expressivity, some of which is not used when the language interfaces with P3P.

An important contribution of this proposal is its focus on the issue of enforcement, through the PEP abstraction. This idea was picked up by the policy activity of PrimeLife, in that the policy engine features not only an automated matching engine to match privacy preferences and policies, but also an automated obligation enforcement engine to ensure that the resulting sticky policy is adhered to. The concept of user-extensible vocabularies was also taken up by PrimeLife: as a privacy research project, we realise that we do not have the expertise to define final and exhaustive lists of usage purposes, actions, resource categories, etc. for all possible industries. The design of the second PrimeLife policy language (described in deliverable H5.3.2) is therefore kept open to such extensions.

6.6 CARML

CARML (Client Attribute Requirements Markup Language) is a specification language that aims at defining application identity requirements, that is, what identity information an application needs and how the application will use it.

A CARML document is an XML document that enables applications working on identity-related data to declare their data requirements and intended usage of personally identifiable information.

The CARML specification provides a standard way for a client application to request data from a service provider, but it does not guarantee that the client application will receive what is requested.

It is assumed that applications only require a fixed set of interactions dealing with identity data:

• Information about a user is looked up by means of one or more indexes. The information typically consists of a subject name derived from an authentication process, or an attribute (e.g., a social security number).
• Tests are performed to check whether a subject has a property (with unknown value), a property with a value that will be known at runtime, or a property with a fixed value.
• The values for a sequence of named properties are retrieved.
• Attributes in the form of name/value pairs associated with a subject can be modified.

CARML allows for the definition of a localised data dictionary for applications, but developer tools using CARML are mostly expected to promote the usage of schemata and dictionaries already specified by an enterprise.

6.6.1 Elements of the CARML language

CARML relies on the SAML syntax as described in SAML V2.0 [SAML V2.0] for several of its elements.

For subject indexes, a SubjectIndexes element is introduced, including one IndexNameIdentifier element or one or more IndexAttribute elements. In the following example, the client declares that a pair of values (e-mail address and Country) will be used as indexes, the client will provide an e-mail address and the service provider is to assume that a static attribute of Country="US" is to be used:

(Only fragments of the listing survive: the name-identifier format urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress and the static value US.)
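A hedged sketch consistent with those fragments; the element names follow the CARML working draft, but the attribute names are approximations:

```xml
<SubjectIndexes>
  <!-- the client will supply an e-mail address as the name identifier -->
  <IndexNameIdentifier
      Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress"/>
  <!-- the service provider is to assume a static Country attribute -->
  <IndexAttribute Name="Country" StaticValue="US"/>
</SubjectIndexes>
```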

When an application needs to send boolean questions to a service provider, those questions are specified in the form of Property elements. Property elements may include a description that aids the identity service managers in defining the appropriate values. The following example includes a sequence of properties: the first asks whether the user has the property AboveEighteen, the second states that an EmploymentLevel will be provided at runtime, and the third expresses that the application wishes to check whether the user has a Department property and whether it is set to the value Information Technology.

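A hedged sketch of the three Property declarations described above (element and attribute names approximate the CARML working draft):

```xml
<!-- does the subject have the AboveEighteen property at all? -->
<Property Name="AboveEighteen">
  <Description>Used to verify the user's age bracket</Description>
</Property>
<!-- the comparison value will only be known at runtime -->
<Property Name="EmploymentLevel" DynamicValue="true"/>
<!-- test against a fixed value -->
<Property Name="Department">
  <Value>Information Technology</Value>
</Property>
```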

Finally, CARML defines an Attribute element that lists the attributes (specified on the basis of the SAML attribute schema) that the client application requests the service provider to return with the subject, as in the following example:
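The original listing is missing from this copy; an illustrative Attribute request, using the SAML 2.0 attribute naming conventions, might read:

```xml
<!-- request the subject's given name (OID and format per SAML 2.0) -->
<Attribute Name="urn:oid:2.5.4.42"
    NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:uri"
    FriendlyName="givenName"/>
```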

For each attribute-related request, the reply may include no values, one or more values, or an exception indicating unauthorised requests or filtering due to policy conditions. Attribute declarations can include a Modifiable permission flag to enable users to modify the relevant values, as in the following example, in which Language is modifiable, while Country is read-only:
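A hedged sketch of such a declaration (attribute names are illustrative):

```xml
<Attribute Name="Language" Modifiable="true"/>
<!-- Modifiable defaults to false, so Country remains read-only -->
<Attribute Name="Country"/>
```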

SubjectIndexes, Property, and Attribute elements are included in a NamedInteraction element. An IRData element contains one or more NamedInteraction elements.

6.6.2 Status of the CARML proposal

The latest working draft on CARML was published in 2006.

6.6.3 AAPML

AAPML (Attribute Authority Policy Markup Language) is an XACML (see Chapter 5) profile which aims at enabling owners of identity-related data to specify in the form of policies the conditions under which information in their control may be used by other applications.

6.6.4 Relations to PrimeLife

CARML was considered in the PrimeLife policy activity, especially because of its capability to express properties over attributes rather than complete attribute values. This concept is very important for fully leveraging the power of anonymous credentials. The XACML-based engines developed within PrimeLife support the same concept, albeit not by using CARML itself, but rather by adhering as much as possible to the original XACML syntax.

6.7 Identity Governance Framework

The Identity Governance Framework [IGF] is an open initiative which aims at tackling the issues related to the management of identity-related information across enterprise IT systems. This initiative includes proposed specifications for a common framework to define usage policies (see [AAPML Spec]), attribute requirements (see [CARML Spec]), and the relevant developer APIs. These proposals are meant to enable businesses to guarantee documentation, control, and auditing with respect to the acquisition, use, storage, and propagation of identity-related data through applications and systems.

Oracle announced the initiative together with the founding participants (Computer Associates, Layer 7 Technologies, HP, Novell, Ping Identity, Securent, Sun Microsystems) in November 2006. The initiative was submitted royalty-free in February 2007 to the Liberty Alliance consortium, which aims at building a more trusted Internet by addressing the technology, business, and privacy aspects of digital identity management, and whose Management Board includes representatives from AOL, Ericsson, Fidelity Investments, France Telecom, HP, Intel, Novell, NTT, Oracle, and Sun Microsystems.

6.8 PRIME Policy Languages

The PRIME project is a large-scale research effort aimed at developing an identity management system able to protect users' personal information and to provide a framework that can be smoothly integrated with current architectures and online services. In this context, access control solutions enriched with the ability to support privacy requirements provide an important service for helping users keep control over their personal information. To fully address the requirements posed by a privacy-aware access control system, the following types of privacy policies have been defined in the context of PRIME.

• Access control policies govern access/release of data/services managed by the party (as in traditional access control). Access control policies define authorisation rules concerning access to data/services. Authorisations correspond to traditional (positive) rules usually enforced in access control systems. An access control rule is an expression of the form:

⟨subject⟩ with [⟨subject conditions⟩] can ⟨action⟩ on ⟨object⟩ with [⟨object conditions⟩] for ⟨purpose⟩ if [⟨conditions⟩]

• Release policies govern the release of properties/credentials/personally identifiable information (PII) of the party and specify under which conditions they can be released. Release policies define the party's preferences regarding the release of its PII by specifying to which party, for which purpose/action, and under which conditions a particular set of PII can be released. Although different in semantics, access control and release policies share the same syntax.
• Data handling policies define how personal information will be (or should be) dealt with at the receiving parties (e.g., information collected through an online service may be combined with information gathered by other services for commercial purposes). Users exploit these policies to define restrictions on secondary use of their personal information. In this way, users can manage their information even after its release. Data handling policies are attached to the PII or data they protect, and transferred as sticky policies to the counterparts. A DHP rule is an expression of the form:

⟨recipient⟩ can ⟨action⟩ for ⟨purpose⟩ if [⟨conditions⟩] provided [⟨conditions⟩] follow [⟨obligations⟩]

A prototype providing functionalities for integrating access control, release and data handling policies evaluation and enforcement has been developed in the context of the PRIME project.

6.8.1 Rough use cases

The reference scenario is a distributed infrastructure that includes three parties: i) users, human entities that request online services; ii) the service provider, the entity that provides online services to users and collects personal information before granting access to its services; iii) external parties, entities (e.g., business partners) with which the service provider may want to share or trade personal information of users. The functionalities offered by a service provider are defined by a set of objects/services. This scenario considers a user that needs to access a service. The user can be registered and characterised by a unique user identifier (user id, for short) or, when registration is not mandatory, characterised by a persistent user identifier (pseudonym). Three major use cases are listed in the following.

• E-commerce. A major factor in the evolution of the Web has been the widespread diffusion of e-commerce, that is, the ability to purchase, sell, and distribute goods and services to customers. A primary concern in the development of e-commerce has been to provide a secure global infrastructure through solutions for secure data exchange and systems for protecting e-services from unauthorised accesses. In recent years, however, the focus has shifted from the protection of server-side resources to the protection of user privacy. If users do not have confidence that their private data are managed in a privacy-oriented way by the server, they will refuse to participate in e-commerce. In this scenario, it is mandatory to provide users with the possibility of protecting their privacy and their sensitive data while still accessing online services.
• Online healthcare system. Healthcare systems support interactions among patients, medical and emergency personnel, insurance companies, and pharmacies. These systems allow for anonymous access to general information and advice, and enforce access control to individual patient records according to general rules, context (e.g., treatment, emergency), and the patient's specific choices (e.g., primary care physician, health insurance). In this context, it is important to offer patients enhanced privacy functionalities to define restrictions regulating access to and management of their data.

• Location-Based Services (LBS). Technical improvements of location technologies make it possible to gather location information with high accuracy and reliability. The physical location of individuals is thus rapidly becoming easily available as a class of personal information that can be processed to provide a new wave of online and mobile services, such as location-based access control (LBAC) services. In addition to LBAC services, many mobile network providers offer a variety of location-based services such as point-of-interest proximity, friend finders, or location information transfer in case of an accident (e.g., the 112 European emergency number). Such services naturally raise privacy concerns. Users consider their physical location and movements highly privacy sensitive, and demand solutions able to protect such information in a variety of environments.

6.8.2 Distinctive features of PRIME languages

Access Control/Release Model and Language

• XML-based syntax. The language provides an XML-based syntax for the definition of powerful and interoperable access control and release policies.
• Attribute-based restrictions. The language supports the definition of powerful and expressive policies based on properties (attributes) associated with subjects and objects.
• Credential definition and integration. The language supports requests for certified data, issued and signed by authorities trusted for making the statement, and for uncertified data, signed by the owner itself.
• Anonymous credential support. The language supports the definition of conditions that can be satisfied by means of zero-knowledge proofs.
• Support for context-based conditions and metadata. The language allows the definition of conditions based on the physical position of the users and on context information, and integration with metadata identifying and possibly describing entities of interest.
• Ontology integration. Policy definition is fully integrated with subject and object ontologies in defining access control restrictions. The language also takes advantage of the integration with a credential ontology that represents relationships among attributes and credentials.
• Interchangeable policy format. Parties need to specify protection requirements on the data they make available using a format that is both human- and machine-readable, and easy to inspect and interchange.
• Interactive enforcement. Rather than providing a simple yes or no decision, policy evaluation provides a way of interactively applying criteria to retrieve the correct sensitive information, possibly managing complex user interactions such as the acceptance of written agreements and/or online payment.
• Variables support. Currently, the access control/release language supports two placeholders, one for the subject and one for the object. This solution represents a good trade-off between expressivity and simplicity, but can easily be extended to support variable definitions.

Data Handling Model and Language

• Attribute-based restrictions and XML-based syntax. As for the access control/release language, the data handling language supports the definition of powerful and expressive XML-based policies based on properties associated with subjects and objects.

• Customised policies. Data handling policies are defined through a negotiation between the user and the service provider. When a user requires a service, predefined policy templates are provided by the service provider as a starting point for creating data handling policies. The templates are then customised to meet different privacy requirements. A user can customise the templates directly, or can be supported by a customisation process that automatically applies some of the user's privacy preferences. If the customised data handling policies are accepted by the service provider, the personal information provided by the user is labeled with them. This represents the most flexible and balanced strategy for the definition of data handling policies.
• Stand-alone policies. Data handling policies are defined as independent rules. Personal data are then tagged with such data handling policies, which physically follow the data when they are released to an external party, thus building a chain of control originating from the data owner.

6.8.3 Relation to standards

XACML v2.0

XACML version 2.0 was ratified by the OASIS standards organisation on 1 February 2005. Similarly to the PRIME languages, XACML proposes an XML-based language allowing the specification of attribute-based restrictions. The main differences from the PRIME languages are as follows.

• XACML does not explicitly support privacy features.
• Although XACML supports the exchange of digital credentials, it does not provide requests for certified credentials.
• XACML does not support or integrate location-based conditions and ontologies.

P3P/APPEL

P3P allows Web sites to declare their privacy practices in a standard and machine-readable XML format. Designed to address the need of users to assess whether the privacy practices adopted by a service provider comply with their privacy requirements, P3P has been developed by the World Wide Web Consortium (W3C). Users specify their privacy preferences through a policy language, called A P3P Preference Exchange Language (APPEL), and enforce privacy protection by means of an agent. Similarly to the PRIME languages, P3P proposes an XML-based language for regulating secondary use of data disclosed for the purpose of access control enforcement. It provides restrictions on the recipients, on data retention, and on purposes. The main differences from the PRIME languages are as follows.

• P3P does not support negotiation of privacy practices: users can only accept the server's privacy practices or stop the transaction. The available opt-in/opt-out mechanisms are therefore limited.
• P3P does not support the definition of policies based on attributes of the recipients.
• P3P does not provide protection against chains of releases (i.e., releases to third parties).

6.8.4 Relation to PrimeLife

The PRIME language had a very strong influence on the policy activities of PrimeLife, if only because of the considerable overlap in participants between the PRIME and PrimeLife projects. The first PrimeLife policy engine, described in deliverable D5.3.1, is a hybrid of the PRIME policy engine and an XACML engine, to take advantage of the privacy features of the PRIME engine and the expressive access control features of XACML.

6.9 WS-Policy

WS-Policy defines a framework for expressing the capabilities and the requirements of entities in the form of policies in XML-based Web service systems.

A policy is defined as a collection of one or more policy assertions. These policy assertions may deal with lower-level communication properties, such as which transport protocol is to be used or whether an authentication scheme should be maintained, as well as with higher-level service characteristics, such as a privacy policy or Quality of Service (QoS). The semantics of individual assertions (ranging from simple strings to complex combinations of items with attributes) is domain-dependent and lies beyond the scope of the WS-Policy framework, which treats them as opaque. WS-Policy aims at providing constructs to indicate how a policy assertion or a set of assertions applies in a Web services environment.

6.9.1 Structure of a policy

A policy is built from individual assertions grouped by policy operators, which are specific XML tags with a wsp prefix referring to the namespace http://www.w3.org/ns/ws-policy of the WS-Policy specification [WS-Policy 1.5]. Two operators in the form of XML tags are used to make statements about policy combinations: wsp:ExactlyOne asserts that exactly one child node must be satisfied; wsp:All asserts that all child nodes must be satisfied.

Here is an example of a policy:
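The listing did not survive in this copy of the document; the discussion that follows suggests it resembled the algorithm-suite example of the WS-Policy Primer, along these lines:

```xml
<wsp:Policy
    xmlns:wsp="http://www.w3.org/ns/ws-policy"
    xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702">
  <wsp:ExactlyOne>
    <!-- two alternative algorithm-suite assertions -->
    <sp:Basic256Rsa15/>
    <sp:TripleDesRsa15/>
  </wsp:ExactlyOne>
</wsp:Policy>
```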

The sp prefix corresponds to the namespace relevant to the domain in which the assertions are defined (i.e., WS-SecurityPolicy). As different assertions are inserted in an ExactlyOne operator, we have different alternatives here. A Web service invocation must then use one of the asserted algorithm suites to comply with, or support, the policy above. More generally, a policy assertion made by a service provider is supported by a service requester when the requester satisfies the requirement expressed by the assertion; a policy alternative is supported by a requester when all the assertions in the alternative are supported; finally, a policy is supported when the requester supports at least one of the alternatives in the policy.

The flexible syntax might lead to the formulation of rather complex policies.

6.9.2 Normal form of a policy

A normal form is defined to simplify the manipulation and clarify the understanding of policies. The structure of the normal form reflects the steps of the basic policy processing operation: all the alternatives are enumerated within a wsp:ExactlyOne operator, and each alternative in turn enumerates all of its assertions in a wsp:All operator.

The policy above in a normal form would then be as follows:
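Assuming a policy that offers two alternative algorithm-suite assertions (as in the WS-Policy Primer), its normal form would read:

```xml
<wsp:Policy
    xmlns:wsp="http://www.w3.org/ns/ws-policy"
    xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702">
  <wsp:ExactlyOne>
    <!-- each alternative wraps its assertions in wsp:All -->
    <wsp:All><sp:Basic256Rsa15/></wsp:All>
    <wsp:All><sp:TripleDesRsa15/></wsp:All>
  </wsp:ExactlyOne>
</wsp:Policy>
```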

6.9.3 Compact form of a policy

As policies in a normal form can be very verbose, constructs are provided to express them in a compact form. If required by the policy processing software, the normal form can always be obtained by means of a recursive procedure which is illustrated in the WS-Policy specification.

Compact expression of policies may be achieved in different ways, such as optional assertions, policy assertion nestings, policy operator nestings, and policy inclusions.

The WS-Policy specification provides the wsp:Optional attribute to indicate that a policy assertion is optional. This is semantically equivalent to expressing two policy alternatives, one with and one without the assertion.

Setting the value of this attribute to false is equivalent to omitting the attribute and thus having a mandatory assertion.
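The equivalence can be sketched as follows (the sp:IncludeTimestamp assertion is chosen purely for illustration):

```xml
<!-- compact form: the assertion is marked as optional -->
<sp:IncludeTimestamp wsp:Optional="true"/>

<!-- equivalent normal form: one alternative with, one without the assertion -->
<wsp:ExactlyOne>
  <wsp:All><sp:IncludeTimestamp/></wsp:All>
  <wsp:All/>
</wsp:ExactlyOne>
```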

6.9.4 Nested policies

The WS-Policy syntax allows for an assertion to include a nested policy expression, as in the following schema:

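The schema itself is missing from this copy; in outline (assertion names are placeholders), a nested policy expression has the shape:

```xml
<sp:Assertion>
  <!-- a nested policy expression qualifying the outer assertion -->
  <wsp:Policy>
    <sp:NestedAssertion/>
  </wsp:Policy>
</sp:Assertion>
```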

Policy assertions with nested policy expressions are normalised recursively.

Policies can also be compactly expressed by nesting the operators wsp:Policy, wsp:All, and wsp:ExactlyOne within one another. Inference rules exploiting these operators' properties (e.g., commutativity, associativity, idempotency, and distributivity) allow for the transformation of compact expressions into normal form.

For instance, wsp:All distributes over wsp:ExactlyOne.
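With hypothetical assertions X, A, and B, the distribution reads:

```xml
<wsp:All>
  <X/>
  <wsp:ExactlyOne>
    <A/>
    <B/>
  </wsp:ExactlyOne>
</wsp:All>

<!-- is equivalent to -->

<wsp:ExactlyOne>
  <wsp:All><X/><A/></wsp:All>
  <wsp:All><X/><B/></wsp:All>
</wsp:ExactlyOne>
```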

6.9.5 References to other policies

WS-Policy allows for the sharing of assertions across different policy expressions by means of the wsp:PolicyReference element. This element comes with a URI attribute that provides a reference to a policy expression. For policy expressions within the same XML document, the reference should point to an expression identified by an ID, while for external policy expressions there is no requirement that the reference be resolvable, as retrieval mechanisms are external to the WS-Policy specification. When a wsp:PolicyReference element refers to a wsp:Policy element, it prescribes replacing the wsp:PolicyReference element with a wsp:All element whose children are the same as the children of the referenced policy. In other words, when processed, a reference element is substituted by the referenced policy, wrapped in a wsp:All element. An example with a reference within a document follows.

(Only fragments of the listing survive: the namespace declaration xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" and the identifier wsu:ID="Protection".)
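From those fragments, a hedged reconstruction of the example (the spelling wsu:Id follows the WSS utility schema; the policy bodies are illustrative):

```xml
<wsp:Policy
    xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"
    wsu:Id="Protection">
  <!-- shared assertions would appear here -->
</wsp:Policy>

<wsp:Policy>
  <!-- when processed, this reference is replaced by the policy above,
       wrapped in a wsp:All element -->
  <wsp:PolicyReference URI="#Protection"/>
</wsp:Policy>
```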

The specification prescribes that a policy must not refer to itself, either directly or indirectly, by means of a wsp:PolicyReference element.

6.9.6 Intersection of policies

Generally, and more specifically also in the context of Web services, interactions can take place only when all involved parties agree on at least one policy alternative.

The WS-Policy specification allows for an intersection process that compares two policies looking for common, or mutually compatible, alternatives. As the semantics of the compared assertions lies beyond the scope of the domain-neutral WS-Policy framework, domain knowledge is clearly required to complete the intersection process.

Thus, the specification only provides an algorithm that approximates compatibility in a domain-independent fashion. The two policies are first normalised.

In a first comparison phase, each alternative from the first policy is compared vocabulary-wise to all alternatives from the second policy: to be compatible, two alternatives must at least share the same vocabulary, that is, they must include assertions of the same types. Two alternatives with different vocabularies are considered incompatible and are discarded in this phase.

The intersection of two compatible alternatives is an alternative which contains all the assertions from both intersected alternatives. If policy A contains an alternative which is compatible with an alternative from policy B, then the two policies are compatible, and their intersection is built as the set of the intersections between all pairs of compatible alternatives (choosing one alternative from each policy).

In the following example, policy P1 and policy P2 present only one pair of compatible alternatives (alternative A2 from P1 and alternative A3 from P2), whose assertions constitute the intersected policy:

Policy P1:

(Of the original listing, only two XPath fragments, /S:Envelope/S:Body, survive.)
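A hedged reconstruction of P1 from the fragments above (the signed-element assertions match the surviving XPaths; all other assertion choices are illustrative):

```xml
<wsp:Policy>
  <wsp:ExactlyOne>
    <wsp:All>  <!-- alternative A1 -->
      <sp:SignedElements>
        <sp:XPath>/S:Envelope/S:Body</sp:XPath>
      </sp:SignedElements>
      <sp:Basic256Rsa15/>
    </wsp:All>
    <wsp:All>  <!-- alternative A2 -->
      <sp:SignedElements>
        <sp:XPath>/S:Envelope/S:Body</sp:XPath>
      </sp:SignedElements>
    </wsp:All>
  </wsp:ExactlyOne>
</wsp:Policy>
```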

Policy P2:

(Of the original listing, only the XPath fragment /S:Envelope/S:Body survives.)
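A hedged reconstruction of P2 along the same lines (only the signed-element XPath survives; the second alternative is illustrative):

```xml
<wsp:Policy>
  <wsp:ExactlyOne>
    <wsp:All>  <!-- alternative A3 -->
      <sp:SignedElements>
        <sp:XPath>/S:Envelope/S:Body</sp:XPath>
      </sp:SignedElements>
    </wsp:All>
    <wsp:All>  <!-- alternative A4 -->
      <sp:EncryptedParts>
        <sp:Body/>
      </sp:EncryptedParts>
    </wsp:All>
  </wsp:ExactlyOne>
</wsp:Policy>
```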

Intersection of policies P1 and P2:
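Sketched under the same assumptions, the intersection contains all the assertions from the single compatible pair of alternatives (A2 from P1 and A3 from P2):

```xml
<wsp:Policy>
  <wsp:ExactlyOne>
    <wsp:All>
      <!-- assertions from A2 -->
      <sp:SignedElements>
        <sp:XPath>/S:Envelope/S:Body</sp:XPath>
      </sp:SignedElements>
      <!-- assertions from A3 -->
      <sp:SignedElements>
        <sp:XPath>/S:Envelope/S:Body</sp:XPath>
      </sp:SignedElements>
    </wsp:All>
  </wsp:ExactlyOne>
</wsp:Policy>
```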

The interpretation of the combined alternatives is beyond the scope of this specification, as it strongly depends on the assertions' semantics. The rationalisation of the new, intersected alternatives into something meaningful to the infrastructure requires domain knowledge. The new policy might require further processing that might show that an alternative is contradictory, and thus not meaningful to the underlying infrastructure that is responsible for performing the required behaviours. In such a case the invocation of the intersection function fails.

6.9.7 Relations to other proposals and to the PrimeLife project

WS-Policy provides a framework for expressing alternative sets of policy assertions from various domains, such as security and reliable messaging, that are supported by a service. Still, no WS-Policy assertions are defined for authorisation, access control, or privacy policies. WS-XACML (see Chapter 6) is a proposal aiming at fulfilling this need.

WS-Policy offers an example of abstraction that separates the domain-dependent assertions from the domain-independent framework of policy management, which can serve as a significant guideline for PrimeLife research.

6.9.8 Status of the WS-Policy proposal

WS-Policy became a W3C Recommendation in September 2007, standardised by the W3C Web Services Policy Working Group [WS-Policy WG]. The WS-Policy Primer [WS-Policy Primer] is available as a companion document.

6.10 Rein

Rein (Rei and N3 [Rein]) is not a policy language per se, but it defines a framework for representing and reasoning over policies in the Semantic Web. It combines Rei [Rei] concepts for policies and N3 logic to reason about the policies. The Rei policy language is expressed in OWL-Lite and the associated engine uses domain knowledge described in RDF. The policies are constraints over allowable and obligated actions on resources in the environment.

The entities involved need to be defined in terms of one another using their Uniform Resource Identifiers (URIs). Rein policy networks are described using Rein ontologies; these distributed but linked descriptions are collected off the Web by the Rein engine and reasoned over to provide policy decisions. The Rein engine allows the use of any policy language, since it has a concept of meta-policy. It provides the ontologies for describing a policy network and mechanisms for reasoning. However, the policy language has to be defined in RDF Schema or OWL to be understood by the engine. Other rule languages may be supported in the future.

6.10.1 Relations to other standards and to the PrimeLife project

Rein was developed as part of the Transparent Accountable Data Mining Initiative (TAMI) project of the Decentralized Information Group (DIG). The group has pursued work in the area of Policy-Based Reasoning for Information Handling Control in Decentralized (Web) Environments. The same group also started work in the field of Social Web privacy [SocialWebPrivacy], including an ontology for privacy in a social web environment.

6.11 OASIS WS-XACML

WS-XACML [WS-XACML v1.0] is a Web service profile that defines a standard way to use XACML for authorisation, access control, and privacy policy in a Web Services environment. The document is currently an OASIS working draft.

The profile describes how client and service can mutually match their requirements and capabilities. WS-XACML defines a new XACML assertion to express them. Requirements are expectations that each side has of the communication peer; capabilities define what each side can fulfil. Hence, there are four sets of assertions: capabilities and requirements for the client side and for the service side. WS-XACML offers a basic matching algorithm for these four policy sets: it matches the requirements of the client side with the capabilities of the service and, vice versa, the requirements of the service with the capabilities of the client. The standard gives some very interesting examples which fit well with planned work in PrimeLife. The service could require a particular authorisation attribute; the client can avoid trying to invoke the service if it does not possess this attribute. An extension of this example is that the service accepts only attribute claims which are signed by a trusted third party. The service could demand that the client act in a particular role; the client knows this in advance and can initiate the role switch before invoking the service. Of course also the

client could impose obligations on the service. The standard mentions an example policy demanding that the service not share private information which a client has to submit when invoking the service. Given that a client can choose between various instances of a particular service, it can now decide which one is willing to fulfil the requirements best. In a more flexible scenario, one service could offer various resources coming with different privacy policies. Although WS-XACML does not offer a pattern for negotiation between client and service, it is easy to imagine building such a negotiation on top of WS-XACML. WS-XACML is particularly useful in the scope of P3P privacy policies: it allows expressing P3P policy preferences and matching them using the new assertion for requirements and capabilities.

In summary, WS-XACML addresses an important practical issue of XACML. XACML itself defines just the assertions and how to combine them; the WS-XACML draft standard offers a profile for how to use, compare, and match them on both sides of a service interaction. However, we currently see four shortcomings of WS-XACML: it is not yet an official standard, it lacks a negotiation protocol to balance requirements and capabilities, the matching algorithms are very basic, and to the best of our knowledge there is not a single implementation available.

6.12 Security Policy Assertion Language (SecPAL)

SecPAL [Becker09] is a declarative security policy language developed to meet the access control requirements of large-scale Grid computing environments. It is a declarative, logic-based language that builds on a large body of work showing the value of such languages for flexibly expressing security policies. It was designed to be comprehensive and provides a uniform mechanism for expressing trust relationships, authorisation policies, delegation policies, identity and attribute assertions, capability assertions, revocations, and audit requirements. This uniformity provides tangible benefits by making the system understandable and analysable. It also improves security assurance by avoiding, or at least severely curtailing, the need for semantic translation and reconciliation between disparate security technologies.
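The flavour of SecPAL's "A says fact" assertions and its delegation construct ("can say") can be illustrated with a toy evaluator. This is purely a sketch: real SecPAL has a formally defined, much richer semantics (constraints, conditional facts, bounded delegation depth), and all principals and resources below are invented.

```python
# Toy evaluator in the spirit of SecPAL assertions with one-step
# delegation. Illustrative only; not the actual SecPAL semantics.

facts = set()        # (speaker, subject, action, resource)
delegations = set()  # (speaker, delegate, action, resource)

def says(speaker, subject, action, resource):
    facts.add((speaker, subject, action, resource))

def can_say(speaker, delegate, action, resource):
    delegations.add((speaker, delegate, action, resource))

def holds(authority, subject, action, resource):
    """True if the authority said it directly, or a delegate it trusts
    for this action/resource said it."""
    if (authority, subject, action, resource) in facts:
        return True
    speakers = {f[0] for f in facts}
    return any((authority, d, action, resource) in delegations and
               (d, subject, action, resource) in facts
               for d in speakers)

# A file server delegates read decisions on a Grid resource to a
# virtual-organisation manager, who then authorises a user.
can_say("FileServer", "VOManager", "read", "/grid/data")
says("VOManager", "alice", "read", "/grid/data")
print(holds("FileServer", "alice", "read", "/grid/data"))  # True
```

The point of the logic-based approach is visible even in this sketch: trust relationships, delegation, and authorisation decisions are all expressed and evaluated in one uniform mechanism.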

Becker et al. [BeMaBu09] extended SecPAL so that it allows specifying "both users' preferences on how their personally identifiable information (PII) should be treated by data-collecting services, and services' policies on treating collected PIIs". This is highly relevant for Activity 5 in PrimeLife, since it supports the same policy language at the user's side, at the service's side, and in complex downstream data usage scenarios. Experiments with these extensions of SecPAL are also being performed within the policy research activities of PrimeLife.

6.13 PrimeLife Policy Engines

PrimeLife recognised that, because of its wide-spread adoption in industry, XACML would be a good starting point for bringing the PrimeLife principles to the real world. As a first step, a policy engine was developed that integrates the Sun XACML engine and the PRIME policy engine, so that policies expressed in XACML can profit from the support for Idemix credentials and data handling policies in the PRIME language. The design of the first engine was described in heartbeat H5.3.1; the engine itself was delivered as D5.3.1. In a second step, PrimeLife will develop a purely XACML-based engine, where extensions to the XACML language itself add support for credential-based access control, sanitised policy dialog, two-sided data handling policies (i.e., policies and preferences) with automated matching, and downstream usage control (i.e., restrictions on data sharing with third parties). The language and architecture extensions for the second engine have been described in heartbeat H5.3.2; the implementation of the engine will be delivered as D5.3.2 in the third year of the project. We briefly describe the features of both engines here.

6.13.1 First PrimeLife Policy Engine

The first PrimeLife Policy Engine is the result of the work done in PrimeLife Work Package 5.3, whose ultimate goal is to provide an effective and easily deployable privacy-aware policy solution that can be adopted in current information systems with minimal impact on existing technologies. The policy engine extends and integrates XACML (eXtensible Access Control Markup Language) [XACML v2.0] with a privacy-aware system that complements it with privacy functionality. The PRIME (Privacy and Identity Management for Europe) architecture [PRIME] has been chosen for this integration. The motivation behind integrating XACML with PRIME is that XACML is a well-known standard that today represents the most effective and accepted solution for controlling access in distributed environments. The PRIME architecture and system, in turn, provide advanced privacy functionality (e.g., negotiation among parties for establishing the least set of information that a user has to disclose to access a specific service, and data handling policies [PRIME policies] for regulating the secondary use of the information collected), but their impact on the real world is limited by the fact that adopting PRIME in existing businesses would require important changes to legacy systems. The main goal of this work is therefore to build a flexible framework that exploits the advantages of XACML in terms of access control, flexibility, and scalability, and the advantages of PRIME in terms of data protection and privacy. The result is the design and implementation of a privacy-aware XACML engine, including support for dialog between parties, incremental release of data, and data handling, with minimal changes to the XACML architecture.

The prototype has been implemented in Java; it extends and integrates the latest version of the PRIME integrated prototype [PRIME] and the Sun implementation [SUN XACML] of the XACML engine, released under a BSD-like Sun license. In the implemented prototype, the access control language defined within PRIME has been extended to support XACML policies. A PrimeLife policy is obtained by inserting into an existing element of the PRIME policy language a new element that in turn contains an XACML policy. It is important to note that the XACML policy does not need to be modified to be included in a PrimeLife policy. Therefore, the standard XACML language (supported by the Sun XACML engine) is used in policy definition, with the exception of the attribute identifiers: to be able to exploit the features offered by the PRIME components, XACML policies must use PRIME URIs to identify attributes (e.g., https://www.prime-project.eu/ont/PIIBase#firstname identifies the attribute first name). For a detailed description of the PRIME language and of the XACML language, we refer the reader to [PRIME policies] and [XACML v2.0], respectively.
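The wrapping idea can be sketched as follows. Note that the element names PrimePolicy and XACMLPolicy are hypothetical placeholders (the actual PRIME element names are not given here); only the XACML namespace and the PRIME attribute URI are taken from the text above.

```python
# Sketch: an unmodified XACML policy embedded inside a PRIME policy
# element. "PrimePolicy" and "XACMLPolicy" are invented names for
# illustration; the real PRIME language defines its own elements.
import xml.etree.ElementTree as ET

XACML_NS = "urn:oasis:names:tc:xacml:2.0:policy:schema:os"
# PRIME URIs identify attributes used inside the XACML policy:
FIRSTNAME = "https://www.prime-project.eu/ont/PIIBase#firstname"

# A standard XACML policy, left entirely unchanged:
xacml = ET.Element(f"{{{XACML_NS}}}Policy", {"PolicyId": "example"})
ET.SubElement(xacml, f"{{{XACML_NS}}}Target")

# Hypothetical PRIME wrapper containing the XACML policy as-is:
prime = ET.Element("PrimePolicy")
container = ET.SubElement(prime, "XACMLPolicy")
container.append(xacml)

print(ET.tostring(prime, encoding="unicode")[:40])
```

The design point this illustrates is that the engine can hand the embedded element to the unmodified Sun XACML engine, while the surrounding PRIME machinery handles dialog, incremental release, and data handling.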

In summary, the first PrimeLife policy engine shows how a solution to access control can be integrated with a more complete solution to privacy and identity management. This implementation can serve as a guideline for future work towards a complete privacy-aware access control system that extends XACML to offer extensive support for the management of privacy requirements.

6.13.2 Second PrimeLife Policy Engine

The second PrimeLife Policy Engine will extend the XACML 3.0 policy language with data handling and credential-based access control capabilities. We maintain the overall structure of the XACML language, but introduce a number of new elements to support the advanced features that our language has to offer, and we also modify the schema of a number of existing elements. The language is intended to be used by the Data Controller to specify the access restrictions to the resources that he offers; by the Data Subject to specify access restrictions to her personal information, and how she wants her information to be treated by the Data Controller afterwards; by the Data Controller to specify how "implicitly" collected personal information (such as IP address, connection time, etc.) will be treated; and by the Data Subject to specify how she wants this implicit information to be treated.

A paper focusing on the extensions that we make to XACML [APD+09] was presented at the joint PrimeLife and W3C Workshop on Access Control Application Scenarios (see Chapter 4). The PrimeLife team is currently enhancing the policy language results gained so far with ideas on how to build deployable privacy-preserving access control systems based on industry-accepted standards. To this end, it is investigating how the standardised Security Assertion Markup Language (SAML) can be extended to serve as a description language for credential proofs, and how the XACML architecture and communication model have to be extended to incorporate these proof descriptions in the evaluation of credential-based access control policies.

The extensions proposed to make XACML and SAML credential-aware were authored in the style of existing XACML and SAML profiles, respectively, so that they can be submitted to the standardisation committees for evaluation once the concepts prove practical and sound.

Chapter 7 Authentication Infrastructure

7.1 The ITU-T X.509 Standard

X.509, also known as ISO/IEC 9594-8, is the ITU-T Recommendation that represents the normative standard for Public Key Infrastructures (PKI). It addresses some of the security requirements in the areas of authentication and other security services through the provision of a set of frameworks upon which full services can be based.

Specifically, this standard defines frameworks for:

• Public-key certificates
• Attribute certificates
• Authentication services

The public-key certificate framework defined in this standard includes a definition of the information objects for Public Key Infrastructure (PKI) and Certificate Revocation List (CRL).

The attribute certificate framework defines the information objects for Privilege Management Infrastructure (PMI) and the Attribute Certificate Revocation List (ACRL). The specification provides, for instance: the framework for issuing, managing, using and revoking certificates; an extensibility mechanism, together with formats for both certificate types and for all revocation list schemes; a set of standard extensions expected to be generally useful across a number of applications of PKI and PMI; and schema components, such as object classes, attribute types and matching rules, for storing PKI and PMI objects in the directory. Other elements of PKI and PMI, such as key and certificate management protocols, operational protocols, and additional certificate and CRL extensions, are expected to be defined by other standards bodies (e.g. ISO TC 68, IETF).

The authentication scheme defined in this standard is generic and may be applied to a variety of applications and environments. The directory makes use of public-key and attribute certificates, and the framework for using these facilities is also defined in the standard. Public-key technology, including certificates, is used by the directory to enable strong authentication, signed and/or encrypted operations, and storage of signed and/or encrypted data in the directory. Attribute certificates can be used by the directory to enable rule-based access control. Although the framework for these is provided in this specification, the full definition of the directory's use of these frameworks, and of the associated services provided by the directory and its components, is supplied in the complete set of directory specifications.

This standard, in the authentication services framework, also:

• specifies the form of authentication information held by the directory;
• describes how authentication information may be obtained from the directory;
• states the assumptions made about how authentication information is formed and placed in the directory;
• defines three ways in which applications may use this authentication information to perform authentication, and describes how other security services may be supported by authentication.

It describes two levels of authentication: simple authentication, using a password to verify a claimed identity; and strong authentication, involving credentials formed using cryptographic techniques. While simple authentication offers some limited protection against unauthorised access, only strong authentication should be used as the basis for providing secure services. The standard is not intended as a general framework for authentication, but its techniques can be of general use for applications which consider them adequate.

7.1.1 X.509 Certificate and Certification Process

X.509 is part of the hierarchical X.500 standard and thus assumes a strict hierarchical system of certificate authorities (CAs) for issuing certificates. This is in contrast to web-of-trust models, like PGP, where anyone (not just special CAs) may sign, and thus attest to the validity of, others' key certificates. The X.500 standard defines how global directories should be structured. Directories are a way to organise a database, a sort of evolution of a phone book. X.500 directories are hierarchical, with different levels for each category of information, such as country, state, and city.

The X.500 standard was intended to provide a worldwide unique identifier structure. The X.500 system has never been fully implemented, so the IETF's Public-Key Infrastructure (PKIX) Working Group has made extensive updates to the standard in order to make it work with the looser organisation of the Internet. In the X.509 system, a CA issues a certificate binding a public key to a particular name, the Distinguished Name defined by X.500. Depending on the issuing authority, the binding can also be between a public key and an e-mail address or a DNS entry.

An organisation can issue root certificates to all employees so that they can use the company PKI system. Browsers such as Microsoft Internet Explorer, Netscape/Mozilla and Opera come with root certificates pre-installed, so SSL certificates from larger vendors, who have paid for the privilege of being pre-installed, will work instantly; in essence, the browser's programmers determine which CAs are trusted third parties. While these root certificates can be disabled, users rarely do so.

X.509 also includes standards for Certificate Revocation List implementations, an often overlooked necessity in PKI systems. The X.509 standard defines what information can go into a certificate, and describes how to write it down (the data format).

All X.509 certificates (up to version 3) contain the following data, in addition to the signature:

Version: Identifies which version of the X.509 standard applies to this certificate, which affects what information can be specified in it. Thus far, three versions are defined.

Serial Number: The entity that created the certificate is responsible for assigning it a serial number to distinguish it from other certificates it issues. This information is used in numerous ways; for example, when a certificate is revoked, its serial number is placed in a Certificate Revocation List (CRL).

Signature Algorithm Identifier: Identifies the algorithm used by the CA to sign the certificate.

Issuer Name: The X.500 name of the entity that signed the certificate, normally a CA. Using the certificate implies trusting the entity that signed it. (Note that in some cases, such as root or top-level CA certificates, the issuer signs its own certificate.)

Validity Period: Each certificate is valid only for a limited amount of time, described by a start date and time and an end date and time; this period can be as short as a few seconds or almost as long as a century. The validity period chosen depends on a number of factors, such as the strength of the private key used to sign the certificate or the amount one is willing to pay for a certificate. It is the period during which entities can rely on the public value, provided the associated private key has not been compromised.

Subject Name: The name of the entity whose public key the certificate identifies. This name uses the X.500 standard, so it is intended to be unique across the Internet. It is the Distinguished Name (DN) of the entity, for example, CN=Joe Bloggs, OU=Research, O=SAP, C=France. These refer to the subject's Common Name, Organisational Unit, Organisation, and Country.

Subject Public Key Information: The public key of the entity being named, together with an algorithm identifier which specifies the public-key cryptosystem this key belongs to and any associated key parameters.
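The fields above can be modelled as a simple record with a validity-period check. This is an illustration only: real certificates are DER-encoded ASN.1 structures, and validation also requires verifying the CA's signature and the revocation status.

```python
# The X.509 certificate fields listed above, as a plain data record with
# a validity-period check. All concrete values below are illustrative.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Certificate:
    version: int               # Version
    serial_number: int         # Serial Number
    signature_algorithm: str   # Signature Algorithm Identifier
    issuer: str                # Issuer Name (X.500 DN)
    not_before: datetime       # Validity Period start
    not_after: datetime        # Validity Period end
    subject: str               # Subject Name (X.500 DN)
    public_key: bytes          # Subject Public Key Information

    def is_currently_valid(self, now=None):
        """True if `now` falls inside the validity period."""
        now = now or datetime.now(timezone.utc)
        return self.not_before <= now <= self.not_after

cert = Certificate(
    version=3, serial_number=1234,
    signature_algorithm="sha256WithRSAEncryption",
    issuer="CN=Example CA, O=Example, C=DE",
    not_before=datetime(2010, 1, 1, tzinfo=timezone.utc),
    not_after=datetime(2011, 1, 1, tzinfo=timezone.utc),
    subject="CN=Joe Bloggs, OU=Research, O=SAP, C=France",
    public_key=b"...",
)
print(cert.is_currently_valid(datetime(2010, 6, 1, tzinfo=timezone.utc)))  # True
```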

7.1.2 Evolution of the X.509 Standard

X.509 Version 1: Available since 1988, widely deployed, and the most generic.

X.509 Version 2: Introduced the concept of subject and issuer unique identifiers to handle the possibility of reuse of subject and/or issuer names over time. Most certificate profile documents strongly recommend that names not be reused, and that certificates should not make use of unique identifiers. Version 2 certificates are not widely used.

X.509 Version 3: The most recent version (1996) supports the notion of extensions, whereby anyone can define an extension and include it in the certificate. Some common extensions in use today are: KeyUsage (limits the use of the keys to particular purposes such as "signing-only") and AlternativeNames (allows other identities to also be associated with this public key, e.g. DNS names, e-mail addresses, IP addresses). Extensions can be marked critical to indicate that the extension must be checked and enforced. For example, if a certificate has the KeyUsage extension marked critical and set to "keyCertSign", it should be rejected if presented during SSL communication, as the extension indicates that the associated private key should only be used for signing certificates and not for SSL.

All the data in a certificate is encoded using two related standards, ASN.1 and DER. Abstract Syntax Notation One (ASN.1) describes the data; the Distinguished Encoding Rules (DER) describe a single way to store and transfer that data. People have been known to describe this combination simultaneously as "powerful and flexible" and as "cryptic and awkward". The IETF PKIX Working Group is in the process of defining standards for the Internet Public Key Infrastructure.
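The DER encoding mentioned above is a tag-length-value scheme. A minimal encoder for two ASN.1 types makes the idea concrete; this sketch handles only non-negative INTEGERs and SEQUENCEs, while real certificates use many more types.

```python
# Minimal DER (tag-length-value) encoder for two ASN.1 types.
# Illustrative only; handles non-negative integers.

def der_length(n):
    """DER definite-length encoding: short form below 128, else long form."""
    if n < 0x80:
        return bytes([n])
    body = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return bytes([0x80 | len(body)]) + body

def der_integer(value):
    """INTEGER (tag 0x02); adds a leading zero byte if the high bit is set."""
    body = value.to_bytes((value.bit_length() + 8) // 8 or 1, "big")
    return b"\x02" + der_length(len(body)) + body

def der_sequence(*elements):
    """SEQUENCE (tag 0x30) of already-encoded elements."""
    body = b"".join(elements)
    return b"\x30" + der_length(len(body)) + body

# A SEQUENCE of two INTEGERs, e.g. a (version, serialNumber) pair:
encoded = der_sequence(der_integer(2), der_integer(1234))
print(encoded.hex())  # 3007020102020204d2
```

Reading the output back: 30 (SEQUENCE) 07 (7 bytes) containing 02 01 02 (INTEGER 2) and 02 02 04d2 (INTEGER 1234).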

7.1.3 PrimeLife Impact

As X.509v3 is traditionally used to structure public-key certificates, it seems natural to offer X.509v3 versions of the issuer keys used in anonymous credential systems. These keys are needed by the user of an anonymous credential to check the validity of the private certificate generated in interaction with the issuer. A party relying on the result of a transaction with this user will also need these issuer keys to check the correctness of the proof. Representing issuer keys in X.509v3 should be possible without massive impact on the X.509 standard, as it would only require a way to represent non-standard key material in the structure. The signature in the X.509 certificate could be a classical one, for example RSA. Handling X.509v3 certificates is implemented in all major browsers and operating systems, but it remains to be investigated how these would react to an unknown key format.

Taking it one step further, leveraging existing X.509v3 handling implementations, it is also possible to represent the private certificate of the user in X.509v3 format. However, this would introduce additional complexity:

• the issuer verification key should be present in the certificate, so a non-standard key format is needed (as above);
• the process of generating the signature and the representation of the signature value in the X.509v3 certificate is regulated by CMS (Cryptographic Message Syntax, PKCS#7). Solving the problem of the signature value representation, and of the signature algorithm identifier, might be possible, but it might be necessary to change the PKCS#7 processing rules to be able to produce a usable X.509v3 representation of a private certificate. Those changes would violate the standard, which would force us to write our own PKCS#7 software or to massively change an existing open source implementation. In that case, standard PKCS#7 implementations would not be able to parse our changed format, even if they allowed for plug-in implementations of new signature algorithms;
• building a private certificate is the result of an interaction between issuer and user, and therefore the issuer does not generate the resulting X.509v3 certificate. Apart from the fact that this might seem odd to those used to X.500, it is necessary to investigate how certain data fields have to be completed (one example is the Subject DN).

A third opportunity is to represent the result of a credential show in CMS/PKCS#7 format or as an X.509v3 attribute certificate. In the CMS case, this would be done in a similar fashion as mentioned for assertion-based signatures (XMLDSIG, see Section 7.3). Most of the problems originate from the fact that the signature algorithm employed does not have a classical structure. As the X.509v3 case is very similar with regard to limitations, it would additionally require a way to represent the proved assertions in the X.509 attribute data model.

7.2 PKIX

PKIX is an IETF Working Group established in the autumn of 1995 with the intent of developing the Internet standards needed to support an X.509-based PKI. The scope of PKIX work has since expanded beyond this initial goal: PKIX not only profiles ITU PKI standards, but also develops new standards for the use of X.509-based PKIs in the Internet. PKIX has produced several informational and standards-track documents in support of the original and revised scope of the WG.

The first of these standards, RFC 2459, profiled X.509 version 3 certificates and version 2 CRLs for use in the Internet. Profiles for the use of Attribute Certificates (RFC 3281), LDAPv2 for certificate and CRL storage (RFC 2587), the Internet X.509 Public Key Infrastructure Qualified Certificates Profile (RFC 3039/3739), and the Internet X.509 Public Key Infrastructure Certificate Policy and Certification Practices Framework (RFC 2527/3647, Informational) are in line with the initial scope.

The Certificate Management Protocol (CMP, RFC 2510), the Online Certificate Status Protocol (OCSP, RFC 2560), the Certificate Request Message Format (CRMF, RFC 2511), the Internet X.509 Public Key Infrastructure Time-Stamp Protocol (RFC 3161), Certificate Management Messages over CMS (RFC 2797), and the use of FTP and HTTP for transport of PKI operations (RFC 2585) are representative of the expanded scope of PKIX, as these are new protocols developed in the Working Group, not profiles of ITU PKI standards.

7.2.1 X.509 Attribute Certificate and Privilege Management Infrastructure

The X.509 (2000) recommendation includes an architecture for Privilege Management based on a form of Attribute Certificate. The attribute certificate framework defined here provides a foundation upon which Privilege Management Infrastructures (PMI) can be built. These infrastructures can support applications such as access control.

The binding of a privilege to an entity is provided by an authority through a digitally signed data structure called an attribute certificate or through a public-key certificate containing an extension defined explicitly for this purpose. The format of attribute certificates is defined here, including an extensibility mechanism and a set of specific certificate extensions.

Revocation of attribute certificates may or may not be needed. For example, in some environments, the attribute certificate validity periods may be very short (e.g. minutes), negating the need for a revocation scheme. If, for any reason, an authority revokes a previously issued attribute certificate, users need to be able to learn that revocation has occurred so they do not use an untrustworthy certificate.

Revocation lists are one scheme that can be used to notify users of revocations. The format of revocation lists is also defined in this specification, including an extensibility mechanism and a set of revocation list extensions. Additional extensions are defined here. In both the certificate and revocation list case, other bodies may also define additional extensions that are useful to their specific environments.

A system using an attribute certificate needs to validate it prior to using that certificate for an application. Procedures for performing that validation are also defined here, including verifying the integrity of the certificate itself, its revocation status, and its validity with respect to the intended use.

This framework includes a number of optional elements that are appropriate only in some environments. Although the models are defined as complete, this framework can be used in environments where not all components of the defined models are used. For example, there are environments where revocation of attribute certificates is not required. Privilege delegation and the use of roles are also aspects of this framework that are not universally applicable. However, these are included in this specification so that environments that do have requirements for them can also be supported. The directory uses attribute certificates to provide rule-based access control to directory information.

7.3 XML Signature

7.3.1 Specification Overview

The XML Signature specification [XML Sig] describes a format that can be used to encapsulate cryptographic signatures of arbitrary content in XML, and to sign XML documents, or subsets of such documents. The specification is designed to be highly extensible: signed information can be subject to arbitrary transformations before signing, and algorithms, both cryptographic and otherwise, are identified by way of URIs and are therefore exchangeable.

A typical signature consists of a SignedInfo element that describes the signed data and the steps applied to them in order to generate the signature. Specifically, SignedInfo identifies the canonicalisation method that is applied to transform the element itself into an octet stream, the signature method, and any number of Reference elements. Each of these elements identifies a resource (or document subset), a chain of transformations, a digest method, and a digest value. To compute the final signature, SignedInfo is canonicalised, hashed, and signed according to the given signature algorithm. Information about the signing key can be part of a KeyInfo element.
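The structure just described can be sketched in a few lines. This is a simplification: the canonicalisation step is omitted, HMAC stands in for a real signature algorithm, and SHA-1 is used only because it appears in the original algorithm URIs; the key and content are invented.

```python
# Sketch of an XML Signature SignedInfo: a Reference carries the digest
# of the signed content, and the serialised SignedInfo is itself signed.
import base64
import hashlib
import hmac
import xml.etree.ElementTree as ET

DS = "http://www.w3.org/2000/09/xmldsig#"
content = b"<doc>signed payload</doc>"  # the resource being signed

signed_info = ET.Element(f"{{{DS}}}SignedInfo")
ET.SubElement(signed_info, f"{{{DS}}}CanonicalizationMethod",
              Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#")
ET.SubElement(signed_info, f"{{{DS}}}SignatureMethod",
              Algorithm="http://www.w3.org/2000/09/xmldsig#hmac-sha1")
ref = ET.SubElement(signed_info, f"{{{DS}}}Reference", URI="#doc")
ET.SubElement(ref, f"{{{DS}}}DigestMethod",
              Algorithm="http://www.w3.org/2000/09/xmldsig#sha1")
digest = ET.SubElement(ref, f"{{{DS}}}DigestValue")
digest.text = base64.b64encode(hashlib.sha1(content).digest()).decode()

# Sign the serialised SignedInfo (real XMLDSIG canonicalises it first):
octets = ET.tostring(signed_info)
signature_value = base64.b64encode(
    hmac.new(b"shared-key", octets, hashlib.sha1).digest()).decode()
print(signature_value)
```

Note how the signature algorithm never touches the content directly: it signs SignedInfo, which in turn holds the content's digest. This indirection is what the PrimeLife discussion in Section 7.3.3 returns to.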

The processing model for XML Signature requires that each Reference element be dereferenced, transformed, and hashed. The validity of an XML Signature therefore implies that the referenced resource (as transformed) was unchanged.

To enable different models and broader semantics, XML Signature supports the Object element as an extensibility point which may, in particular, hold a Manifest, which is itself a collection of Reference elements. Processing for Reference elements within a Manifest differs from processing for those which are direct children of SignedInfo: they need not be dereferenced; the precise meaning of a signature that involves a Manifest will depend on the application context. For example, a manifest might be used to hold digest values of multiple representations of the same resource; in this case, it might be enough for one of several digest values to match.

To hold further signature semantics, the optional SignatureProperty element may occur within an Object element; this element is specifically designed to hold additional information about the creation of its target signature. It can, e.g., be used to hold information about time stamps or signature generation hardware.

7.3.2 Status

XML Signature is a broadly implemented specification, used as a basic signing primitive by, among others, SAML and WS-Security. The XAdES [XADES] specification for advanced electronic signatures profiles and extends XML Signature.

The original XML Signature specification was jointly developed by W3C and the IETF, and first released as a W3C Recommendation in 2002. At the time of writing of this document, W3C is reviewing a slightly revised version for publication as a Second Edition of the XML Signature Recommendation. Republication of this document within the IETF framework is expected.

The W3C Workshop on Next Steps for XML Signature and XML Encryption [XML Sig/Enc Workshop] in September 2007 led to a proposal for new work on XML Signature.

Chartered in early 2008, the XML Security Working Group (http://www.w3.org/2008/xmlsec/) pursues two main avenues:

• A set of incremental changes to the existing XML Signature and XML Encryption specifications, to address support for additional cryptographic algorithms. In February 2010, a Last Call Working Draft for XML Signature 1.1 was published. A Last Call Working Draft for XML Encryption 1.1 is anticipated during the first half of 2010. The changes made in these versions of the specifications are incremental and aimed at near- to mid-term deployment.
• A more radical revision of the specifications' transform and canonicalisation model, to address questions such as attack surface, streaming processors, and implementation performance. While First Public Working Drafts and some design notes have been published, this work is proceeding on a slower time scale.

7.3.3 PrimeLife Impact

In PrimeLife we are interested in capturing the privacy-enhancing aspects of signature-related primitives. Such primitives include group signatures, but also e-cash and ring signatures. However, our main interest in PrimeLife is in anonymous credentials, which have a wide variety of features that allow them to emulate other privacy-enhancing mechanisms such as group signatures and e-cash. They are also a major building block in realising privacy-preserving attribute-based access control. While our discussion could be kept more general, we focus on anonymous credentials as they cover most of the cases.

There are several ways in which signatures and anonymous credentials are related. Those relations are elaborated on in the following paragraphs.

XMLDSIG as Format for Private Certificates

The show of a credential is a proof of knowledge of a signature (this signature is sometimes called the private certificate). The signature needs to be obtained in an interactive protocol, but it could be stored locally in XMLDSIG format.

As only knowledge of the signature is proven, there is not much need to exchange these signatures directly with a verifying entity. This would therefore mostly be an archiving and data exchange format for synchronisation between different devices of the same user.

XMLDSIG as Format for Anonymous Credential-based Signatures

Proofs of knowledge of some secret material (in our case a signature on the user's secret key and different attributes) can be turned into non-interactive proofs using the Fiat-Shamir heuristic. Many known signature schemes follow this approach, e.g. Schnorr signatures, group signatures, and many ring signatures.

Some of these Fiat-Shamir-based signatures are conventional signatures that fit into the current XMLDSIG framework (i.e., all signature schemes where the message does not need to be input in the clear but can be input as a hash). Others, such as assertion-based signatures that prove assertions about the attributes of credentials (anonymous credentials, group signatures, ring signatures), do not fit into the framework as easily, since they require the message in the clear; this deserves further evaluation.

XMLDSIG as Format for Credential Shows

The proof of knowledge of a signature on certain attributes can be seen as 'passing on' the original signature on these attributes to a verifier. This passing on has desirable privacy-preserving properties: unlinkability to the issued signature, selective disclosure of attributes, and conditional showing of attributes (basically all the features that anonymous credentials provide). In this interpretation, no new message is signed by the user. The user only transforms the signature obtained from the credential issuer into a new format that reveals less information about data that was already signed, hiding for instance his identity.

This again could be expressed in XMLDSIG and deserves further consideration. A critical property of XML Signature is that the signature algorithm proper is applied to an XML description of the signed material; this description includes a digest value of the signed data. To apply advanced signing algorithms (e.g., to selectively reveal attributes), it is often desirable to interact directly with the signed material.
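To make selective disclosure of signed attributes concrete, consider a much simpler mechanism than anonymous credentials: the issuer signs salted hashes of the attributes, and the user later reveals any subset by disclosing the corresponding (value, salt) pairs. This sketch is only an illustration of the selective-disclosure idea; unlike anonymous credentials it provides no unlinkability, and the HMAC stands in for a real issuer signature.

```python
# Selective disclosure via salted hash commitments (illustrative only;
# NOT an anonymous credential scheme: shows are linkable to issuance).
import hashlib
import hmac
import os

def commit(value, salt):
    """Salted hash commitment to an attribute value."""
    return hashlib.sha256(salt + value.encode()).hexdigest()

# Issuer: commit to each attribute and sign the commitments (stand-in).
attributes = {"name": "Joe Bloggs", "birthdate": "1980-01-01", "country": "FR"}
salts = {k: os.urandom(16) for k in attributes}
commitments = {k: commit(v, salts[k]) for k, v in attributes.items()}
issuer_sig = hmac.new(b"issuer-key",
                      "".join(sorted(commitments.values())).encode(),
                      hashlib.sha256).hexdigest()

# User: disclose only the country; name and birthdate stay hidden.
disclosed = {"country": (attributes["country"], salts["country"])}

# Verifier: check the disclosed value against the signed commitment.
value, salt = disclosed["country"]
print(commit(value, salt) == commitments["country"])  # True
```

The user can thus choose what to reveal, but cannot alter what the issuer asserted: a changed value or salt no longer matches the signed commitment.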

Such direct interactions with the signed material include showing only parts of the signed material, or predicates or functions computed from signed values. This partial showing of signed material involves the owner of the signature as an active entity: the owner can choose what to reveal about the signed material and what to keep secret. However, he can only generalise the data, not change the assertions made by the issuing organisation.

7.4 OAuth

The OAuth protocol [OAuth Core 1.0] is built on top of HTTP. It enables authority delegation on the Web: A user can authorise a third party (the Consumer) to access a Protected Resource (controlled by the Service Provider) in some useful way. Authorisation is revocable. The protocol design assumes a prior out-of-band agreement between consumer and service provider about data usage and certain protocol aspects.

An example scenario is authorising a location information hub to access a social travel site's information about the user's location: The location hub learns (from the user, or through a referral) about the URL of the user's travel profile. The user is redirected to the travel profile site, where he can determine what information (if any) is given to the location hub, and how long that authorisation should last. The authorisation information is passed back to the location hub, which can now process location information further. The user can revoke the authorisation without collaboration from the location hub's side.

Another example scenario is authorising an application (on the Web or locally) to fetch photos from a social photo sharing site.

7.4.1 Protocol flow

Traditionally, OAuth assumes that a consumer and a service provider have an advance agreement that manifests itself in the following items:

• an (opaque) consumer identifier known to both consumer and service provider;
• establishment of a signing mechanism;
• key material for that mechanism.

Requests from the consumer to the service provider are signed, and the signatures must be verified by the service provider.

The basic authentication flow consists of three steps. First, the consumer obtains an unauthorised request token from the service provider. In requesting this token through HTTP, the consumer can provide additional parameters to the service provider, which might influence the scope of the authorisation. The protocol provides a framework for passing these parameters, but does not define a vocabulary for them, as this aspect is deemed application-specific. The service provider returns the token and a corresponding shared secret (plus, possibly, service-specific additional parameters) to the consumer. The initial request message is signed with the prearranged key; it includes the prearranged consumer identity, a possibly opaque string.

In the second step, the user authorises the request in an interaction with the service provider. This authorisation request is linked to the protocol's remaining steps through the request token, which may even be entered manually, enabling the user-facing parts of the protocol to be executed through referral methods other than HTTP redirects. For example, the authorisation step could be performed through text message on a mobile phone.

During the authorisation step, the service provider will authenticate the user and be informed about the consumer identity and other pertinent information. When the user's decision is communicated to the service provider through HTTP (and a callback URI was passed to the service provider beforehand), the service provider will redirect the user's browser back to the consumer; again, the request token is used to link the various requests. Otherwise, some other referral method might be used -- from contextual knowledge on the user side (the consumer displaying an appropriate message) to URIs in text messages.

The consumer now performs the third step of the protocol: exchanging the request token for an access token. To this end, the consumer sends a specific signed request to the service provider (through a direct HTTP request), which will (if the user has indeed authorised the consumer) respond with an access token and corresponding secret. The request includes the request token that had previously been used in interactions between the service provider, the consumer, and the user, as well as the consumer's opaque identifier. The service provider answers with the access token, a shared secret corresponding to that token, and any additional parameters.
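
The signatures exchanged in these steps can be sketched as follows. OAuth Core 1.0 specifies HMAC-SHA1 over a "signature base string" built from the HTTP method, the request URL, and the sorted, percent-encoded parameters; the signing key concatenates the consumer secret and (if present) the token secret. The parameter values below are illustrative.

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def base_string(method, url, params):
    # Signature base string per OAuth Core 1.0: method, URL and the
    # RFC 3986-encoded, lexicographically sorted request parameters.
    enc = lambda s: quote(str(s), safe="")
    pairs = "&".join(f"{enc(k)}={enc(v)}" for k, v in sorted(params.items()))
    return "&".join([method.upper(), enc(url), enc(pairs)])

def hmac_sha1_signature(bs, consumer_secret, token_secret=""):
    # The key is "consumer_secret&token_secret" (empty token secret in step one).
    key = f"{quote(consumer_secret, safe='')}&{quote(token_secret, safe='')}"
    digest = hmac.new(key.encode(), bs.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Illustrative request-token call (step one of the flow).
bs = base_string("POST", "https://sp.example/request_token",
                 {"oauth_consumer_key": "dpf43f3p2l4k5l03",
                  "oauth_nonce": "kllo9940", "oauth_timestamp": "1191242096"})
signature = hmac_sha1_signature(bs, "kd94hf93k423kf44")
```

The service provider recomputes the same base string from the received request and rejects the request if the signatures differ.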

The access token and corresponding secret are then used to generate authorisation tokens on subsequent requests for the protected resource from the consumer to the service provider (e.g., accessing the user's location information from the travel site, or accessing a private photo for purposes of printing). The authorisation information can be passed through the HTTP Authorization header, as a POST parameter, or as a query parameter on a GET request. The protocol does not impose any limitations on reuse of the access token; the life cycle of that token is determined by the service provider.

The protocol is implemented through simple HTTP-based transactions (including referrals around the user authorisation decision); to achieve confidentiality of transactions, TLS is used. Confidentiality of network transactions is particularly significant for interactions between consumers and service providers that establish tokens and corresponding secrets.

The authorisation tokens sent when the consumer accesses the protected resource are one-time tokens. They are, however, subject to an offline brute-force attack to recover the corresponding secrets.

The OAuth WRAP [OAuth WRAP] profile proposes a simplified version of the protocol exchange that removes the signatures and similar mechanisms from the OAuth protocol. Instead, the security properties are derived exclusively from the use of HTTPS for all exchanges.

7.4.2 Trust and privacy properties

The semantics of authorisations are out of scope for the protocol's definition: they are a matter of (out-of-band) negotiation between consumer and service provider, which are expected to share API definitions, and of human-readable interactions between the service provider and the user. Extension points in the protocol can be used to pass these semantics between the parties.

The user trust decision is made in a transaction with the service provider. The service provider is the source of the human-readable information about the scope of the authorisation. Information about the consumer identity is derived from a prior out-of-band interaction between the service provider and the consumer, so that the protocol only bears an opaque identifier (though implementations might choose a human-readable identifier) bound to a signature verification key known to the service provider.

Authorisation tokens can be revoked through an interaction between the user and the service provider. The consumer does not require knowledge of the user credentials with the service provider, and does not need to know the user identity with the service provider. In addition, the protected resource URI might be anonymous.

The OAuth WRAP proposal changes the security properties of the protocol significantly: While the confidentiality of the tokens that are transmitted is protected in transit using TLS, these tokens fundamentally behave like password-equivalents.

7.4.3 Specification development

OAuth [OAuth.NET] was developed by an ad-hoc group of contributors. It is now the subject of the IETF oauth Working Group [IETF Oauth WG], which is chartered to bring the technology to the IETF standards track.

The WRAP proposal has been submitted to the IETF Working Group, and is now under consideration.

7.4.4 Open Source Implementations

A list of implementations [OAuth Code] is maintained by the OAuth project. The protocol is widely deployed.

Chapter 8 User Control Infrastructure

8.1 Smart Cards: User-controlled Token for Privacy Protection

In a single sentence a smart card can be defined as

a personalised physical token under control of the user that can be used to perform cryptographic computations and keep data confidential.

In a more elaborate description, the important aspects of a smart card and its common use are:

• it is associated with a unique human user, the cardholder;
• the cardholder has physical possession of the card;
• some commercial entity, the card issuer, has prepared the card for use and delivered it to the cardholder;
• some commercial entity, the application issuer, has initialised the card with personalised data, e.g. an account number;
• the application issuer defines the services that a card can deliver to the cardholder and configures the card accordingly;
• a cardholder controls the use of the card and its functions;
• use of the card is localised to a terminal that has the facility to render the services the card has been configured for.

In many practical cases of deployed smart cards, the application issuer and the card issuer are the same. A multi-application card, where multiple independent application issuers maintain separate "card applications" on a shared card, is technically possible, yet has not seen any real deployment.

The lack of a user interface on the device has been a limitation in its use. Integration into another personal device, such as a mobile phone handset, can overcome this limitation.

PrimeLife has realized a generator of anonymous credentials (see Chapter 8) as an application on a Java Card smart card.

Deployments

Smart cards have been widely deployed (over 5 billion in 2009, as reported by [Eurosmart]), and deployments are still increasing. By far the largest deployment is in mobile telephones, where the card is used as the subscriber identity module (SIM). Banking cards are being migrated from pure magnetic stripe cards to smart cards with the magnetic stripe as backup. Citizen ID cards and healthcare insurance cards have been or will be issued in many countries.

Trusted device

Classically, the smart card is regarded as the trusted device in mobile devices. A smart card is operated as a trusted device in the application managed by the application issuer to deliver security functions. Here, the trust is based on the tamper resistance of the smart card chip, on the reliability of the card operating software, and on the cryptographically maintained chain of trust between the card manufacturer (with any pre-issuance card processing agents), the card issuer, and the application issuer. As a basis for trust, many smart card products have undergone security evaluation according to the Common Criteria [CC], and the card industry has produced a number of protection profiles to guide these security evaluations.

Increasingly, other trusted devices appear in the context of mobile devices. Secure Elements (SEs) such as SD cards (in various form factors, e.g. Micro SD), embedded SEs, stickers, and Trusted Mobile Base technologies (leveraging the processor of the mobile device to embed security) are entering the market. The latter security solutions, e.g. in the area of the Trusted Mobile Base, use technology concepts such as virtualisation, ARM TrustZone, Intel TXT, and others. In privacy-enhancing and identity-management-enabled applications, all of these SEs, from the classical smart card in the form of a SIM/UICC through to the concept of the Trusted Mobile Base, can play important roles for the different stakeholders in the mobile services value chain.

Cardholder and user control

In the case of classical Smart Cards, the cardholder has control over the card, for example:

• by inserting the card in a reader, or for contactless cards by holding the card in front of it;
• by entering a PIN or password;
• by providing a biometric sample.

In a use case, the cardholder would, for example, indicate his or her intent to use the card with the first two control factors. Further, a biometric sample (with or without match on the smart card) could provide proof of the presence of the cardholder at the location and at a certain point in time. Applications on the card can be configured for the type of user control required for the functions they provide:

• PIN or biometric entry a single time when the card service session starts;
• PIN or biometric entry each time a particular service function is requested;
• different PINs or passwords may be configured for different functions;
• different biometric aspects may be configured for different functions;
• any combination of the above.

In practice, a smart card has a single PIN, and any application on the card that needs a PIN uses that single one. Similarly, if the card supports biometric on-card verification, any application on the card may use it. Since most deployed smart cards support a single application, these convenient cardholder-control configurations are usually sufficient. Privacy aspects of biometric control methods are further investigated in the next section of this chapter.

The upcoming Secure Elements that are expected to be relevant for privacy-enhanced services and identity management applications allow similar forms of user control. Here, user control can, for example, be exercised through interaction via a secure user interface (such as the keypad of a mobile device) and leverage concepts such as PINs and passwords, as with classical smart cards. Secure user interfaces could also be linked to biometric sensors in mobile devices to allow for matching processes on the SEs. Secure user interfaces can be provided via different technologies: Smart Card Web Server technologies and Internet Smart Cards on the one hand, and isolated security spaces in concepts such as the Trusted Mobile Base on the other. The latter even offers the opportunity to provide drivers for secure display and secure keypad solutions.

Form Factor

In a formal view, based on the international standard ISO/IEC 7810, a smart card can be in the form of the conventional credit card or, alternatively, in the smaller form used for SIMs. The mobile phone, with an inserted SIM card, can be seen as providing an alternative form factor for the user-controlled trusted functions. (The one-sentence definition above could easily be applied to a mobile phone.) For interactions with the Web and the Web 2.0 service architectures targeted in PrimeLife, the mobile phone will be a convenient platform for user control, as it has Internet connectivity and provides a user interface.

NFC [NFC] technology takes this extended form factor further and actually transforms the mobile phone into a contactless card. Since there is now a user interface, the snooping abuse inherent to contactless cards can be reduced, e.g. by an on/off input.

8.1.1 Standardisation

International standardisation for smart card technology is done in ISO/IEC [ISO/IEC WG4], in CEN TC 244 and in ETSI/3GPP [ETSI/3GPP]. Standardisation is concentrated in three areas:

• physical communication and application infrastructure;
• cryptographic algorithms;
• applications.

Work on communication and application infrastructure is done in SC17/WG4 (see Chapter 4) (cards with contacts) and SC17/WG8 (contactless cards). The main standard documents are:

• ISO/IEC 7816-3, specifying the basic command-response interaction with a card and low-level details for operating and communicating over contacts;
• ISO/IEC 7816-4, specifying a structure for data stored on a card and commands to access stored data;
• ISO/IEC 7816-8, specifying commands to perform cryptographic operations;
• ISO/IEC 14443-*, specifying operating and communicating over an RF field;
• ISO/IEC 24727-*, specifying an API for interacting with a card as a device that carries user identification data.
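
The command-response interaction of ISO/IEC 7816-3/-4 is based on APDUs: a four-byte header (CLA, INS, P1, P2), optionally followed by a length-prefixed data field and an expected response length. The following minimal sketch covers the short APDU form only; the instruction bytes follow ISO/IEC 7816-4, while the PIN value and PIN reference are illustrative.

```python
def build_apdu(cla, ins, p1, p2, data=b"", le=None):
    # Short-form command APDU: 4-byte header, optional Lc + data, optional Le.
    apdu = bytes([cla, ins, p1, p2])
    if data:
        apdu += bytes([len(data)]) + data   # Lc followed by the data field
    if le is not None:
        apdu += bytes([le])                 # expected response length
    return apdu

# VERIFY (INS 0x20): present the PIN "1234" against PIN reference 0x01.
verify_pin = build_apdu(0x00, 0x20, 0x00, 0x01, b"1234")

# SELECT FILE (INS 0xA4) by file identifier 0x3F00 (the master file).
select_mf = build_apdu(0x00, 0xA4, 0x00, 0x00, b"\x3f\x00")
```

The card answers each command with a response APDU ending in a two-byte status word (e.g. 0x9000 for success).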

Work on cryptographic algorithms is done in TC224/WG16. Work on applications for cards is done in ETSI/3GPP (SIM, USIM, SIM toolkit), SC17/WG11 (driving licences), TC224/WG15 (citizen ID cards), and by the industry groups EMV (payment cards) and Global Platform [Global Platform] (application management).

8.1.2 Architectures

At a technical level the card can be modeled in three alternative and complementary views:

• as a storage device with a hierarchically organised data structure, enhanced with access control and transport security;
• as a cryptographic co-processor with a tamper-resistant key store;
• as a general-purpose, programmable computer.

Privacy protection, and PETs in general, are not explicitly addressed in most current standardisation activities. Where they are addressed, the concern is the protection of privacy-sensitive data in transfer from card to host. The specification of privacy protection rules and the interpretation of privacy security attributes are left to the application outside the card.

The standards do not specify access conditions for reading unique identifiers on the card, such as a chip serial number, an application serial number, or a public key certificate. In the ISO/IEC 14443 series, however, the fixed chip identifier initially broadcast to establish a connection has been replaced by a random one.

8.1.3 Anonymous Credentials on Java Cards

Secure identity tokens such as electronic identity (eID) cards, eTickets, and access badges are emerging everywhere. As they are implemented today, they do not protect the privacy of the user. Indeed, they typically rather harm the user's privacy, as several uses of one token can be linked. In addition, each use reveals personal information, probably exceeding what is actually needed for the application at hand. By using anonymous credential schemes to issue the certificates stored and used on these tokens, privacy can be regained. However, on the inexpensive hardware platforms typically used for eID cards, it is challenging to implement such anonymous credential schemes because of the mathematical computations they require. We have investigated the restricted computational capabilities of a Java Card 2.2.1 and how they can nevertheless be employed to perform the necessary mathematical operations. We achieve transaction times that are orders of magnitude faster than those of any prior attempt, and at the same time we have raised the bar in terms of key length and trust model compared to [BCGS09]. We have thus demonstrated that smart cards can indeed be employed to build privacy-protecting electronic authentication systems such as eID cards.

8.1.4 Strategy and Actions

PrimeLife has completed the research aspects of smart cards with the realisation of the credential generator and the completion of an analysis of the privacy aspects of a web server on a smart card. PrimeLife is considering including a secure MicroSD with an integrated smart card in the mobile phone handset demo that is being prepared.

Two areas of activities in PrimeLife may interact with standardisation for smart cards:

• development of algorithms for user control,

• development of user interfaces for user control.

Possible actions:

• participation in SC27/WG5 to additionally address the adaptation and design of protocols with a role for the smart card in giving the user control in protecting her privacy;
• participation in SC17/WG4 (see Chapter 4) to introduce PET protocols and algorithms in the areas of:
◦ the presentation of the communication interface of the credential generator (see Chapter 8) for standardisation;
◦ protecting user data exchanged with the card for PIN or biometric identification;
◦ privacy-protecting enhancements for the requests and procedures for entity authentication and card management;
◦ restriction of access to unique card-identifying data;
• identification of protocols that are suitable for deployment on cards or that require support by smart card functions, and organisation of timely technical input to the work on standards from other work packages.

8.2 Biometrics Standardisation and Privacy

Biometrics is the science of measuring physical or behavioural characteristics that are suitable for the identification of individuals. There is a variety of applications using biometric user authentication today, including national ID cards, digital signatures, and physical or logical access control. A biometric trait, such as a fingerprint, iris, dynamic signature, or face, is stored in the system, and typically a claimant presents his or her biometric characteristic again to be compared with the previously recorded reference data. From a cryptographic point of view, biometric data must be regarded as public, not secret: biometric credentials are unique identifiers, not secrets. From a privacy viewpoint, however, biometric data must be considered confidential. Biometric data is critical in terms of privacy: it is not merely related information, like a bank account number or street address, but is tied to the individual, usually for their whole life. Biometric information can contain unwanted additional personally identifiable information such as gender, age, susceptibility to some diseases, or even sexual preferences.

The open mass applications of biometrics have led to a need for standardisation. Industry, government, and academic delegates attend the standardisation consortia. This document briefly describes the standardisation landscape for biometrics and points to aspects that are relevant in terms of privacy.

8.2.1 Biometrics Standardisation

Standardisation of biometrics started with national groups, strongly led by NIST (the National Institute of Standards and Technology) in the United States. Several industries are also dealing with biometrics standardisation, e.g. the ICAO (International Civil Aviation Organisation). The most important standardisation activity today is ISO/IEC JTC1 SC37 (biometrics). It has established liaisons to all major standardisation bodies and is organised into six Working Groups:

• WG1: Harmonised Biometric Vocabulary and Definitions
• WG2: Biometric Technical Interfaces
• WG3: Biometric Data Interchange Format
• WG4: Profiles for Biometric Applications
• WG5: Biometric Testing and Reporting
• WG6: Cross-Jurisdictional and Societal Aspects

Another important group is ISO/IEC JTC1 SC17 WG11, applied biometrics on cards. It deals with on-card comparison of biometrics, as explained below.

8.2.2 Architectures

A biometric system operates in two phases: enrolment and verification/identification. A biometric reference data set is recorded in the enrolment phase and stored in a database or on a portable data carrier. The actual application assumes that reference data was previously recorded. In a physical access control system, users would present biometric traits, e.g. their faces, to the system to be compared with the reference data. If the two samples are considered similar, access is granted.

From a privacy point of view, it is of utmost importance where the storage and comparison actually take place. In a networked world where one would not trust a communication partner or the operating system on a host computer, one can still trust the smart card carried in a wallet. It enhances privacy when an application using a global database is reorganised to store all biometric data on a portable data carrier. The same is true for changing from an online verification, where data has to be transmitted, to offline verification in a tamper-proof embedded system. A special case of such an embedded system is a smart card, and the technology to perform a biometric comparison within a smart card is named Match-on-Card, as addressed in SC17 WG11.
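
The comparison step can be sketched as a distance computation against the stored reference followed by a threshold decision; in a Match-on-Card setting this logic runs inside the card, so the reference template never leaves it. The bit-string representation and the threshold below are illustrative, not taken from any standard.

```python
def hamming_distance(a: bytes, b: bytes) -> int:
    # Number of differing bits between two equal-length templates.
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def matches(sample: bytes, reference: bytes, threshold: float) -> bool:
    # Accept if the fraction of differing bits stays below the threshold;
    # the threshold trades off false accepts against false rejects.
    return hamming_distance(sample, reference) / (8 * len(reference)) < threshold

reference = bytes([0b10110010, 0b01101100])
sample = bytes([0b10110010, 0b01101101])   # one bit differs from the reference
assert matches(sample, reference, threshold=0.2)
```

Because two captures of the same trait never match exactly, the decision is always probabilistic, which is one reason biometric references deserve careful protection.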

8.2.3 Strategy and Actions

To enhance privacy in today's and future applications, standardisation activities should be observed and influenced where necessary. Depending on the particular application, databases and insecure data management should be avoided. Protecting biometric data by cryptographic means, as well as storing and comparing it on portable data carriers, should be promoted.

The most important groups to address are the following:

• SC37 WG2: care needs to be taken that the most common interfaces respect the needs for privacy and do not exclude Privacy-Enhancing Technologies.
• SC37 WG3: the data formats should also be carefully inspected and comments made to ease the use of offline storage and verification.
• SC37 WG4: today there are only few application profiles, and the existing profiles are not well designed to protect the privacy of individuals. Further profiles should be encouraged to change this.
• SC17 WG11: the standards produced by this group deal exactly with privacy-enhancing technology. They should be supported in reaching a stable document status and promoted in other liaisons.
• Other activities, such as testing, should be observed in a passive role.

Chapter 9 Identity Systems

In this section we provide an overview of different identity systems as well as an insight into identity federation protocols. The reason for presenting these topics in one section is that there are specifications and implementations released under the same name (e.g. OpenID).

9.1 OpenID

Main OpenID specifications:

• OpenID Attribute Exchange 1.0 [OpenID 1.0]
• OpenID Authentication 2.0 [OpenID 2.0]

Additional information is available from the OpenID foundation [OpenID Foundation].

9.1.1 Background

The OpenID protocol traces some of its roots back to Web blog commenting use cases: the fundamental idea is that, instead of asking commenters to coin a user name/password pair, commenters should identify themselves by providing the URI of their own blog. The protocol is based on simple data formats and layered on top of HTTP.

OpenID therefore only requires implementation effort on the relying party and OpenID provider sides. On the user's side, an ordinary Web browser is sufficient.

9.1.2 Protocol Flow

The OpenID protocol is executed between the relying party (RP), the OpenID provider (OP), and the user.

In a typical scenario, the user will visit the RP's Web site, where a login-like form will ask him to enter an OpenID identifier (a URI or XRI, in typical use cases). Upon submission, the RP will use this identifier to discover the OP's URI. RP and OP perform an anonymous Diffie-Hellman key exchange (called association in OpenID parlance); the key that is negotiated is later used to authenticate further messages that are exchanged. The user's Web browser is then redirected to the OpenID provider with an authentication request.
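
The association step can be sketched as a plain (unauthenticated) Diffie-Hellman exchange; the derived value then keys the message authentication codes on later messages. The small modulus below is illustrative only; the OpenID specification fixes a default 1536-bit modulus for real associations.

```python
import secrets

# Illustrative toy parameters (p is a 32-bit prime); real associations use
# the default modulus from the OpenID Authentication specification.
p = 0xFFFFFFFB
g = 5

# Each side picks a private exponent and publishes g^x mod p.
rp_private = secrets.randbelow(p - 2) + 1
op_private = secrets.randbelow(p - 2) + 1
rp_public = pow(g, rp_private, p)
op_public = pow(g, op_private, p)

# Both sides derive the same shared value, later used to key the MAC.
rp_secret = pow(op_public, rp_private, p)
op_secret = pow(rp_public, op_private, p)
assert rp_secret == op_secret
```

Because the exchange is anonymous, it protects only against passive eavesdroppers, not against an active man in the middle; OpenID accepts this trade-off for deployability.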

The OpenID provider will at this step perform user authentication (which might be as simple as verifying the presence of a cookie, or as complex as the execution of another federation protocol), and possibly interact with the user to enable him to authorise the overall OpenID transaction. The details of this step are out of scope for the core OpenID specifications. Instead of only using a URI, OpenID providers are also experimenting with the use of e-mail addresses to authorise the OpenID transaction, by transforming the e-mail address into a URI.

If user authentication (and authorisation of the transaction) is successful, the OpenID provider redirects the user's browser back to the relying party; any assertions that are passed back to the relying party carry a message authentication code which is keyed with the shared secret established during the association phase.

The OpenID protocol exchange scales badly to use cases in which individual HTTP requests are to be authenticated; in typical deployments, the OpenID protocol is executed once (or a few times) while the user interacts with the relying party. Authentication state is then attached to the user's session.

9.1.3 Message Formats

The abstract syntax for OpenID messages consists of a collection of simple tag-value pairs. There is no provision for more deeply structured data.

The messages are transmitted through HTTP (or HTTPS); there are several concrete syntaxes depending on the environment in which the message is passed.

During the association phase, the relying party will submit the OpenID message as the body of an HTTP POST request, and receive the answer in a simple text-based format.

For messages that are passed through the user's browser (in particular the authentication request and response), messages are always encoded in HTTP requests -- either in the body of HTTP POST requests (i.e., through form submission), or in query parameters of an HTTP GET request.

Messages can be "signed" by using a message authentication code keyed with the shared secret established during the association phase.
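
The signature is computed over a key-value serialisation of the signed fields. A sketch, assuming HMAC-SHA256 as the association's MAC algorithm and illustrative field names and values:

```python
import base64
import hashlib
import hmac

def kv_serialise(fields, signed_keys):
    # OpenID key-value form: one "key:value" line per signed field, in order.
    return "".join(f"{k}:{fields[k]}\n" for k in signed_keys).encode()

def sign_fields(fields, signed_keys, mac_key):
    # MAC keyed with the shared secret from the association phase.
    mac = hmac.new(mac_key, kv_serialise(fields, signed_keys), hashlib.sha256)
    return base64.b64encode(mac.digest()).decode()

fields = {"op_endpoint": "https://op.example/auth",
          "claimed_id": "https://alice.example/", "mode": "id_res"}
sig = sign_fields(fields, ["op_endpoint", "claimed_id", "mode"], b"shared-secret")
```

The relying party recomputes the MAC over the same serialisation and rejects the assertion if the values differ.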

The message format is extensible with additional tags; the authentication request and response can therefore be used to pass personal information about the user from the OpenID Provider to the relying party.

9.1.4 Trust and Privacy Properties

The OpenID protocol provides a framework for certain assertions about a user's association with a URI. It does not provide an independent cryptographic proof to the relying party that the user has indeed executed a certain protocol with the OpenID Provider.

The establishment of trust between the relying party and the OpenID provider is out of the protocol scope. In deployments, the requisite policies range from accepting identities from any OpenID provider, through OpenID provider blacklists, to approaches where few (or only one) previously known OpenID providers are trusted.

Privacy concerns focus on the ability of the user's OpenID provider to link user transactions with different relying parties. It should also be noted that OpenID can be used to pass along personal information; how the release of this information is authorised is up to the individual OpenID Provider's choice.

Criticisms of OpenID center around certain aspects of the protocol design, and on risks that a malicious relying party might be able to successfully impersonate the user's OpenID provider.

9.1.5 Specification Development

The OpenID specifications were developed through an informal collaboration of interested parties. Since then, the OpenID foundation has been formed as a framework for further specification development; the foundation also manages intellectual property rights around OpenID.

9.1.6 Open Source Implementations

Numerous open source implementations of OpenID are available in different languages such as C#, Java, Perl, and PHP. A list of OpenID libraries [OpenID libraries] is hosted on the OpenID wiki. In addition to the current implementations of OpenID, there was an Apache project that has been integrated into OpenID.

• Heraldry (06/09/2007) [Heraldry]
◦ supports the OpenID protocol and planned to support the CardSpace protocol;
◦ merged into OpenID;
◦ License: Apache 2.0.
• OpenID4Java (Sxip) [OpenID4Java]
◦ Sxip also provides the Sxipper [Sxipper] Firefox plugin, which is explained in more detail in Applications (see Chapter 9);
◦ Language: Java;
◦ License: Apache 2.0.
• Netmesh Infogrid [Infogrid]
◦ Netmesh is the founder of LID and co-founder of Yadis (see Chapter 9);
◦ Language: Java, PHP, Perl;
◦ License: Sleepycat-like open source/commercial license [Netmesh license].

A Firefox plugin provided by VeriSign might prove useful for users of OpenIDs. The so-called SeatBelt [SeatBelt] plugin detects OpenID conformance of the RP. If the RP supports OpenID, the user is asked whether he wants to use his OpenID to sign in. The user can even choose between different OpenIDs that he controls via the toolbar button.

9.1.7 PrimeLife Perspective on OpenID

From a PrimeLife perspective, OpenID is a platform for decentralised identity federation and the transmittal of personal information that merits further investigation, in particular as its deployment currently seems successful. The relative simplicity of many core aspects of OpenID, while easing deployment, will also be a challenge, as it might render the integration of advanced privacy-enhancing technologies in the context of this protocol more difficult, since they must be implemented by the OpenID provider.

9.2 Higgins

The Higgins open source identity management project [Higgins] is an open source Internet identity framework designed to integrate identity, profile, and social relationship information across multiple sites, applications, and devices. Higgins is not a protocol; it is software infrastructure supporting a consistent user experience that works with all popular digital identity protocols, including WS-Federation (see Chapter 9), WS-Trust, SAML (see Chapter 9), LDAP, and Microsoft CardSpace (see Chapter 9). The main contributing partners within the Higgins project are Parity Inc, Novell, IBM, and Serena, with interested support from Computer Associates, Oracle, and Google.

The end-user paradigm of Higgins is based on the i-card metaphor: the user manipulates visual representations, called i-cards, which represent the user's identity with identity-granting institutions (identity providers). When accessing the provider of a Web service or a Web site (the relying party), the i-card GUI lets the user select an i-card, and thus the identity provider and the personal information released to the relying party.

The Higgins user interfaces allow for the issuance of i-cards, their management, and their usage when accessing Web sites and services. The project currently supports various user interface implementations on diverse platforms (Apple OS, Linux, Windows). These user interfaces are implemented either as a pure Web-based architecture, as browser extensions, or as a combination of a browser extension and a self-contained GUI application.

The data model of Higgins is based on the Higgins Global Graph (HGG) data model and the Higgins Identity Attribute Service (IdAS). The HGG distinguishes amongst:

• Identity contexts: the data space for digital identities, such as a directory, a social network infrastructure, etc.
• Entities: contained in contexts, entities represent real-world abstractions such as users, groups, organisations, etc.
• Attributes: entities have a set of attributes, which can be simple or complex (e.g., composite values), such as given name, date of birth, nationality, etc.
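The context/entity/attribute structure described above can be sketched in a few lines of Python. This is a hypothetical illustration of the HGG concepts only; the class and method names are invented here and do not reflect the actual Higgins IdAS API.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """A real-world abstraction (user, group, organisation) within a context."""
    entity_id: str
    # Attribute values may be simple, or complex (composite) values,
    # modelled here as nested dicts.
    attributes: dict = field(default_factory=dict)

@dataclass
class Context:
    """A data space for digital identities, e.g. a directory or a social network."""
    context_id: str
    entities: dict = field(default_factory=dict)

    def add(self, entity: Entity) -> None:
        self.entities[entity.entity_id] = entity

# Usage: a directory context holding one user entity with a complex attribute.
directory = Context("ldap://directory.example.org")
alice = Entity("alice", {"given_name": "Alice",
                         "birth": {"date": "1980-01-01", "nationality": "CH"}})
directory.add(alice)
print(directory.entities["alice"].attributes["given_name"])  # Alice
```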

Developers use a Java based framework that provides an interoperability and portability abstraction layer over existing “silos” of identity data. IdAS makes it possible to “mash-up” identity and social network data across highly heterogeneous data sources including directories, relational databases, and social networks. Support for OWL/RDF based ontologies is intended to allow for mapping between semantically equivalent identity attributes.

The overall high-level architectural schematic is shown in the figure below. In a preliminary step (1), an identity is fashioned, represented as an i-card and stored in the user's i-card store (commonly on disk or some other storage device). The identity provider (IdP) relies on the IdAS to obtain the various identity attributes defining the user's identity.

In order to access a Web service or a Web site (the relying party), the user contacts the site, for example via his or her browser (step 2). The site redirects the request back to the user and lets him or her select an appropriate i-card using the Higgins i-card GUI, also known as the identity selector service (ISS). Once a card is selected, the identity provider associated with it is invoked to generate a secure token (implemented, for example, as a SAML assertion) (step 3). This secure token is relayed to the relying party, which verifies the presented token and either grants or denies access to the Web service or resource (step 4).

Figure 3: High-level Higgins architecture
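The four steps above can be sketched as a toy exchange. This is an illustrative sketch only; the function names are hypothetical and no real Higgins interfaces are used.

```python
def create_icard(store, name, idp):
    """Step 1: fashion an identity, stored as an i-card in the user's store."""
    store[name] = {"idp": idp}

def access_resource(store, card_name, relying_party):
    card = store[card_name]                  # step 2: user selects an i-card
    token = card["idp"]("signed-claims")     # step 3: the IdP issues a secure token
    return relying_party(token)              # step 4: the RP verifies and decides

# Toy identity provider and relying party.
idp = lambda claims: {"issuer": "idp.example.org", "claims": claims}
rp = lambda token: "granted" if token["issuer"] == "idp.example.org" else "denied"

store = {}
create_icard(store, "work-card", idp)
print(access_resource(store, "work-card", rp))  # granted
```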

Higgins is compatible with Microsoft CardSpace and allows users to import CardSpace's Information Cards and use them to access CardSpace-compliant Web sites.

The Higgins project released version 1.0 of the environment in March 2008. Further work is being investigated in the following areas:

• Policies: Higgins currently reuses the Microsoft CardSpace claims language to express secure token requirements. More sophisticated policy and claims syntax and semantics are of interest.
• Ontologies: ontologies on attributes, to allow mapping between required policy claims and IdAS-supplied identity attributes.
• Additional identity schemes: in addition to Microsoft CardSpace, other identity schemes such as OpenID or IBM's Identity Mixer technology are considered for integration into the Higgins framework.
• IdAS Access Authorisation: a unified and generalised access authorisation layer for the Higgins IdAS environment is discussed within the project.

9.3 Windows CardSpace

Windows CardSpace [CardSpace] is the identity selector provided by Microsoft. Hence, it is neither a standardisation effort nor open source, but a commercial product. Nevertheless, we describe it here because it completes the picture of commonly used identity systems. CardSpace is shipped with the .NET Framework (version 3.0 and later) and is thus part of Windows Vista and Windows 7. It provides four major features:

• support for any digital identity system
• consistent user control of digital identity
• replacement of password-based Web login
• improved user confidence in the identity of remote applications

CardSpace is built on top of the Web Services protocol stack. It uses WS-Security, WS-Trust, WS-MetadataExchange, and WS-SecurityPolicy, which means that it can easily be integrated with other WS-* applications. In CardSpace, a so-called Information Card contains all claims that are associated with an identity of a user. If a Web site is to accept Information Cards for authentication, the developer needs to add an HTML object tag to the code of the Web site. This tag declares which claims the Web site needs for authentication. The developer then has to decrypt and evaluate the token that CardSpace sends to the Web site.
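The relying-party markup can look roughly like the following sketch, which follows the commonly documented CardSpace object-tag pattern; the exact claim set is illustrative, and the claim URIs shown are the standard identity claim namespace:

```html
<object type="application/x-informationCard" name="xmlToken">
  <!-- Token format the site will accept -->
  <param name="tokenType"
         value="urn:oasis:names:tc:SAML:1.0:assertion" />
  <!-- Claims the site needs for authentication (illustrative selection) -->
  <param name="requiredClaims"
         value="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname
                http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress" />
</object>
```

When the user submits the surrounding form, the encrypted token issued by the identity provider is posted in the `xmlToken` field, which the site then decrypts and evaluates.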

We typically rely on a number of different digital identity systems, each of which may use a different underlying technology. To think about this diversity in a general way, it is useful to define three distinct roles:

• User: the entity that is associated with a digital identity
• Identity provider: an entity that provides a digital identity for a user
• Relying party: an application that in some way relies on a digital identity to authenticate a user and then makes an authorisation decision

Given these three roles, it isn't difficult to understand how CardSpace can support any digital identity. A user might rely on an application that supports CardSpace, such as a Web browser, to access any of several relying parties. She might also be able to choose from a group of identity providers as the source of the digital identity she presents to those relying parties. Whatever choice she makes, the basic exchange among these parties comprises three steps:

First, the application gets the security token requirements of the relying party that the user wishes to access. This information is contained in the relying party's policy, and it includes things such as what security token formats the relying party will accept and exactly what claims those tokens must contain. Once it has received the details of the security token this relying party requires, the application passes this information to CardSpace, asking it to request a token from an appropriate identity provider. After this security token has been received, CardSpace gives it to the application, which passes it on to the relying party. The relying party can then use this token to authenticate the user or for some other purpose. Working with CardSpace does not require relying parties or identity providers to implement any proprietary protocols.
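The three-step exchange can be sketched as follows. This is a hedged illustration of the data flow only; the function and field names are invented here and are not the CardSpace API, and the point of the sketch is that only the claims named in the RP's policy leave the identity provider.

```python
def get_policy(relying_party):
    """Step 1: the application retrieves the RP's security token requirements."""
    return relying_party["policy"]

def request_token(identity_provider, policy):
    """Step 2: CardSpace asks a matching identity provider for a token
    containing exactly the claims the policy requires."""
    return {"format": policy["token_format"],
            "claims": {c: identity_provider["claims"][c]
                       for c in policy["required_claims"]}}

def present_token(relying_party, token):
    """Step 3: the application passes the token on to the relying party."""
    return token["format"] == relying_party["policy"]["token_format"]

rp = {"policy": {"token_format": "SAML1.1",
                 "required_claims": ["givenname", "emailaddress"]}}
idp = {"claims": {"givenname": "Alice", "emailaddress": "alice@example.org",
                  "dateofbirth": "1980-01-01"}}

token = request_token(idp, get_policy(rp))
assert "dateofbirth" not in token["claims"]  # only the requested claims are sent
print(present_token(rp, token))  # True
```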

CardSpace implements an intuitive user interface for working with digital identities. Each digital identity is displayed as an Information Card. Each card represents a digital identity that the user can potentially present to a relying party. Along with the visual representation, each card also contains information about a particular digital identity: which identity provider to contact to acquire a security token for this identity, what kinds of tokens this identity provider can issue, and exactly what claims these tokens can contain. By selecting a particular card, the user is actually choosing to request a specific security token, with a specific (sub-)set of claims, created by a specific identity provider. In fact, the user need not disclose the full information associated with an Information Card; she can verify what will be revealed to the relying party.

There are different interoperability initiatives implementing CardSpace on the relying party side. More specifically, the projects using CardSpace in one way or another are Bandit [Bandit], the Concordia Project [Concordia], Higgins (see Chapter 9), and OSIS (see Chapter 9).

9.4 Information Card Foundation

Deutsche Telekom, Equifax, Google, Intel, Microsoft, Novell, Oracle, and PayPal lead this initiative, which builds around the idea of managing information cards in digital wallets [ICF]. The initiative focuses on reproducing in the digital domain the experiences users have with cards such as credit cards, identity cards, or membership cards. By doing so, they hope to make "everyday Web transactions [...] much easier, faster and safer" [ICF Overview]. In particular, the consortium fosters interoperability among the individual identity selectors developed by the partners (e.g., CardSpace (see Chapter 9), DigitalMe [DigitalMe], openinfocard [openinfocard]). To this end, it makes use of specifications and code developed by the Bandit project [Bandit].

The initiative tries to establish an identity metasystem that allows users to click on the digital cards administered by the digital wallet instead of using username/password combinations for authentication. They hope that through this mechanism "businesses will enjoy lower fraud rates, higher affinity with customers, lower risk, and more timely information about their customers and business partners" [TechHerald]. Industry branding across the members' sites should help build awareness of the technology and its advantages.

9.5 Open-Source Identity System (OSIS)

The Open Source Identity System [OSIS at Identity Commons] working group was created to intensify work on interoperability between the different identity management projects. Its declared goal is to align the emerging protocols, projects, and companies such that overlaps are avoided and the resulting infrastructure is interoperable. The main focus currently lies on interoperability between Information Cards and OpenIDs, which shows that proprietary protocols and products are considered as well as open source initiatives. The open source projects taken into account are Bandit [Bandit], Heraldry (integrated into OpenID (see Chapter 8)), Higgins (see Chapter 9), OpenSSO, OpenXRI, Shibboleth, and xmldap.

A list of OSIS Participants [OSIS Participants] is publicly available.

9.5.1 Specification development

OSIS is an opportunity for developers to gather data on the interoperability of their code with other standards or implementations. Consequently, there is no specification development apart from the specification of the features to test [OSIS Feature Test].

9.5.2 Open Source Interoperability Workshops

Different projects developed various features that were tested for interoperability. The results of those interoperability tests are used to increase the quality of the respective projects. The most recent identity interoperability conference, where implementations were tested to discover shortcomings of either the implementations or the respective standards, was the 'I5 User-Centric Identity Interop through RSA 2009'. There was a preparatory workshop hosted by OSIS and the Concordia Project [Concordia].

9.6 WS-Federation

WS-Federation (see the WSFed Technical Committee [WSFED TC]) introduces mechanisms to manage and broker trust relationships in a heterogeneous and federated environment. This includes support for federated identities, attributes, and pseudonyms. 'Federation' refers to the concept that two or more security domains agree to interact with each other, specifically letting users of the other security domain access services in their own security domain. For instance, two companies that have a collaboration agreement may decide that employees of the other company may invoke specific Web services. Such scenarios with access across security boundaries are called 'federated environments' or 'federations'. Each security domain has its own security token service(s), and each service inside these domains may have individual security policies. WS-Federation uses the WS-Security, WS-SecurityPolicy, and WS-Trust specifications to specify scenarios that allow requesters from one domain to obtain security tokens in the other domain, thereby subsequently gaining access to the services in that domain.

To illustrate this concept with an example, imagine that a user Alice from company A intends to access Bob's Web service in company B. Alice and Bob do not have any prior relationship, but both companies have agreed to federate certain services, and the decision is that particular users from company A may access dedicated services inside company B. By some means, Alice knows the endpoint reference of Bob's service. Using the basic mechanisms defined in WS-PolicyAttachment, WS-MetadataExchange, and WS-SecurityPolicy, Alice retrieves the security policy of Bob's service and detects that the security token service STSB of company B issues tokens to access this service. Alice issues a security token request to the security token service STSA of company A, claiming to need a token to access STSB. Company A and company B are federated, therefore STSA is able to issue a security token for Alice. Of course, this may depend on whether Alice belongs to the group of A's employees that are permitted to access Bob's services. In the next step, Alice requests a token for accessing Bob's service from STSB and proves her authorisation using the token issued by STSA. After validating the token issued by STSA, STSB issues a security token for access to Bob's service (assuming that Bob's Web service belongs to the group that company B offers to company A). In the last step, Bob's Web service is invoked by Alice. During that final request, Alice presents the token issued by STSB.
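The chained token issuance in this example can be sketched as follows. This is an illustrative model of the trust chain only; the function names and dict-shaped "tokens" are invented here and do not represent WS-Trust message syntax.

```python
def sts_a(user):
    """Company A's STS issues a token scoped to company B's STS,
    but only for employees permitted to use the federation."""
    if user in {"alice"}:
        return {"subject": user, "issuer": "STSA", "audience": "STSB"}
    raise PermissionError(user)

def sts_b(token_a):
    """Company B's STS validates STSA's token, then issues a token
    scoped to Bob's Web service."""
    assert token_a["issuer"] == "STSA" and token_a["audience"] == "STSB"
    return {"subject": token_a["subject"], "issuer": "STSB",
            "audience": "bob-service"}

def bob_service(token_b):
    """Bob's Web service accepts only tokens issued by STSB for itself."""
    return token_b["issuer"] == "STSB" and token_b["audience"] == "bob-service"

print(bob_service(sts_b(sts_a("alice"))))  # True
```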

Besides this introductory example, WS-Federation shows how such a federation could work across multiple security domains or how delegation could be used. Delegation means that a user may delegate certain access rights on one federated resource to a different federated resource. WS-Federation adds security to service aggregation. Additionally, WS-Federation defines mechanisms to handle pseudonyms (aliases used at different services and federations) and management mechanisms for the pseudonyms, including single sign-in and sign-out (sign-out refers to the removal of pseudonym related information at different services).

WS-Federation is helpful for establishing trust relationships and is therefore essential for Web service work in PrimeLife.

9.7 SAML

The Security Assertion Markup Language (SAML) is an XML standard that defines a framework for exchanging security information, such as authorisation, authentication, and attribute statements. It was developed by the standards organisation OASIS, the Organisation for the Advancement of Structured Information Standards (see Chapter 4).

9.7.1 Background

SAML 2.0 [SAML V2.0] comes from the combined effort of OASIS, the Liberty Alliance and the Shibboleth Project. These standards bodies enhanced SAML 1.0 to create SAML 2.0.

SAML 2.0 was ratified as an official OASIS standard in 2005 and is now backed by multiple vendors and organisations around the world as the industry standard for deploying and managing open identity-based applications. SAML 2.0 represents a step toward the convergence of identity standards, and all future enhancements to Liberty Federation will be based on SAML 2.0.

The Liberty Alliance added support for SAML 2.0 to Liberty Web Services in 2005 and at that time incorporated SAML 2.0 testing into its Liberty Interoperable conformance programme. WS-Security also supports SAML.

9.7.2 Architecture

SAML is defined in terms of:

• Protocols: how to define a request or a response.
• Assertions: how to express information about authentication, attributes, and authorisation. An assertion contains a package of information comprising one or more statements made by a SAML authority. SAML defines three kinds of assertion statements: authentication (a subject was authenticated by a specified method at a certain time), attribute (the subject is associated with a set of attributes), and authorisation (a request to allow the subject to access a specified resource).
• Bindings & Profiles: how SAML protocols are mapped onto transport layers (bindings) and how they are combined to support a specific use case (profiles). For example, the Web Browser Single Sign-On Profile describes how SAML authentication assertions are issued and communicated between an identity provider and a service provider to enable single sign-on for a browser user.
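To make the assertion structure concrete, a minimal SAML 2.0 assertion combining an authentication statement and an attribute statement might look like the following sketch. The issuer, subject, timestamps, and attribute are illustrative values, not taken from any real deployment:

```xml
<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
                ID="_a1b2c3" Version="2.0" IssueInstant="2010-01-12T09:30:00Z">
  <saml:Issuer>https://idp.example.org</saml:Issuer>
  <saml:Subject>
    <saml:NameID>alice@example.org</saml:NameID>
  </saml:Subject>
  <saml:AuthnStatement AuthnInstant="2010-01-12T09:29:58Z">
    <saml:AuthnContext>
      <saml:AuthnContextClassRef>
        urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport
      </saml:AuthnContextClassRef>
    </saml:AuthnContext>
  </saml:AuthnStatement>
  <saml:AttributeStatement>
    <saml:Attribute Name="givenName">
      <saml:AttributeValue>Alice</saml:AttributeValue>
    </saml:Attribute>
  </saml:AttributeStatement>
</saml:Assertion>
```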

9.7.3 Protocol Flow

As an example, let us consider the basic template for achieving SSO using SAML2, in particular the service provider (SP) initiated protocol.

In this scenario, a user visits an SP's Web site for the first time to access some resource. The SP determines the location of an endpoint at an identity provider (IdP) for the authentication request protocol and sends a SAML2 authentication request to the IdP through the user agent (e.g., via the HTTP Redirect binding). The IdP reads the authentication request and first checks whether the user is already logged in; if not, the IdP asks the user to provide authentication material (credentials), for example a login/password pair. If the credentials are valid, the IdP creates a SAML2 Response message and sends it to the SP through the user agent, e.g., using the SAML2 HTTP POST binding. This message may indicate an error or include an authentication assertion. The SP receives and checks the SAML2 Response message; it may respond to the user agent with its own error, or establish a security context for the user and provide the requested resource.
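The SP-initiated flow can be condensed into a small sketch. The functions below stand in for HTTP redirects and SAML2 messages; all names and message shapes are illustrative, not part of any SAML library.

```python
def sp_authn_request(idp_endpoint):
    """The SP builds an authentication request carried via HTTP Redirect."""
    return {"to": idp_endpoint, "type": "AuthnRequest"}

def idp_handle(request, session, credentials=None):
    """The IdP checks the session, authenticates the user if needed,
    and answers with a Response message (error or assertion)."""
    if not session.get("logged_in"):
        if credentials != ("alice", "secret"):   # toy credential check
            return {"type": "Response", "error": "authn-failed"}
        session["logged_in"] = True
    return {"type": "Response", "assertion": {"subject": "alice"}}

def sp_consume(response):
    """The SP validates the Response and establishes a security context."""
    if "error" in response:
        return None
    return {"user": response["assertion"]["subject"]}

session = {}
resp = idp_handle(sp_authn_request("https://idp.example.org/sso"),
                  session, credentials=("alice", "secret"))
print(sp_consume(resp))  # {'user': 'alice'}
```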

9.7.4 Open Source Implementations

• OpenSAML implementation from Internet2 [OpenSAML]
  ◦ Implements SAML 2.0
  ◦ C++ and Java libraries
  ◦ License: Apache 2 License
• Enterprise Sign On Engine (ESOE), an SSO solution [ESOE]
  ◦ Implements SAML 2.0 and Lightweight XACML (LXACML)
  ◦ Java libraries
  ◦ License: Apache 2 License
• Liberty Alliance Single Sign-On (LASSO), free Liberty Alliance implementation [LASSO]
  ◦ Also supports SAML 2.0
  ◦ C libraries
  ◦ License: GNU GPL, and commercial license for proprietary use
• Lightbulb (subproject of OpenSSO) [Lightbulb]
  ◦ Implements SAML 2.0
  ◦ PHP
  ◦ License: Common Development and Distribution License
• Shibboleth from Internet2 [Shibboleth]
  ◦ Privacy-extended SAML implementation
  ◦ C and Java libraries
  ◦ License: Apache Software License
• ZXID [ZXID]
  ◦ Implements SAML 2.0 as a C library
  ◦ C libraries (with Perl, PHP, and Java support)
  ◦ License: Apache 2 License

9.8 Liberty Identity Federation

The Liberty Identity Federation Framework (Liberty ID-FF) is one of the three modules of Liberty architecture. It defines a set of protocols, bindings, and profiles for identity federation, cross-domain authentication, and session management.

9.8.1 History and relationship with SAML

Previous versions of the Liberty ID-FF (up to 1.2) were built on the SAML 1.0/1.1 specifications. More recently, the Liberty ID-FF (v1.2) was integrated into the SAML 2.0 specification. SAML 2.0 also incorporates components from the Shibboleth initiative.

Figure 4: ID-FF SAML convergence

9.8.2 Liberty profiles

The Liberty ID-FF describes various profiles. A Liberty profile is basically a combination of content specifications and transport mechanisms to support specific functions. The existing Liberty profiles, grouped according to their functions, are the following:

• Single Sign-On and Federation: the profiles by which a service provider obtains an authentication assertion from an identity provider, facilitating single sign-on and identity federation.
• Name Registration: the profiles by which service providers and identity providers specify the name identifier to be used when communicating with each other.
• Federation Termination Notification: the profiles by which service providers and identity providers are notified of federation termination.
• Single Logout: the profiles by which service providers and identity providers are notified of authenticated session termination.
• Identity Provider Introduction: the profile by which a service provider discovers which identity providers a user agent may be using.
• Name Identifier Mapping: the profile by which a service provider may obtain a name identifier with which to refer to a user agent at a SAML authority.
• Name Identifier Encryption: the profile by which one provider may encrypt a name identifier so that it can pass through a third party without revealing the actual value until received by the intended provider.

A full description of these profiles can be found in the Liberty ID-FF Profiles specification [ID-FF Profiles].

A particularly relevant use case is single sign-on; the corresponding protocol and a specific profile are described below.

9.8.3 Profiles of the Single Sign-On and Federation Protocol

The Single Sign-On and Federation Protocol defines messages exchanged between service providers and identity providers to support single sign-on. The mapping of these messages to particular transfer protocols (e.g., HTTP) and protocol flows is described in the Single Sign-On and Federation profile, which lists various possible solutions. An example follows.

9.8.4 Single Sign-On Protocol Flow Example: Liberty Artifact Profile

The user (or user agent) connects to a service provider (SP). To log in, the user has to select the preferred identity provider (IdP), for example from a list presented on the service provider's login page (alternatively, in some implementations, the SP itself may discover the preferred IdP by other means). The user's browser is then redirected to the IdP, with an embedded parameter indicating the originating SP, and the user may log in to the IdP by providing the necessary credentials.

The IdP verifies the login credentials and, if successful, redirects the user agent to the originating SP with a one-time-use, encrypted credential, called an artifact, included in the URI. The artifact is a user handle that the SP can use to query the IdP and receive a full SAML assertion. The main advantage of this approach is that the artifact is small enough to fit in the URI.

In the next step, the SP uses the artifact to query the IdP about the user. In its response, the IdP provides the assertions for the user, and the SP may then establish a security context and respond with the requested resource.
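The artifact exchange can be sketched as follows. This is an illustrative model only; the function names are hypothetical, the "artifact" is a random handle rather than the encoded SAML artifact format, and the back-channel query is a direct function call.

```python
import secrets

ISSUED = {}   # IdP-side store: artifact -> full assertion

def idp_login(user):
    """The IdP authenticates the user and returns a small one-time artifact."""
    artifact = secrets.token_urlsafe(8)          # short enough to fit in a URI
    ISSUED[artifact] = {"subject": user, "authenticated": True}
    return artifact

def idp_resolve(artifact):
    """Back-channel query: the SP trades the artifact for the full assertion."""
    return ISSUED.pop(artifact, None)            # one-time use

def sp_consume_artifact(artifact):
    assertion = idp_resolve(artifact)
    return assertion["subject"] if assertion else None

art = idp_login("alice")
print(sp_consume_artifact(art))   # alice
print(sp_consume_artifact(art))   # None -- the artifact cannot be replayed
```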

9.8.5 Liberty and CardSpace

CardSpace (see Chapter 9) is a Microsoft .NET component designed to manage digital identity. Unlike Liberty, it is not a set of standards but a product. CardSpace is mainly built on WS-* standards, but also supports some features from SAML.

There are some differences between the Liberty and CardSpace approaches. The Liberty specifications are defined to be used with general Web browsers, whereas CardSpace introduces new functionality in the user's Web browser and the underlying operating system (it is currently shipped with Windows Vista and the .NET Framework 3.0 and later). In practice, CardSpace moves some of the identity management logic from the identity provider to the user's computer. This results in a loss of generality, but enables the use of proof-of-possession keys, which may increase the security of token delivery. Regarding authentication, the CardSpace selector has a fixed list of authentication methods, whereas in the Liberty framework identity providers can present a customised authentication page.

In short, although the two protocols try to solve the same general problem, CardSpace has a more client-centered approach, relying on a smart non-standard client, whereas Liberty provides a distributed and open, but more complex, framework.

9.8.6 PrimeLife and Liberty

Liberty standards are increasingly being adopted in many industries, and they play a major role in identity federation technologies. The PrimeLife consortium should evaluate them and compare them to other solutions (e.g., WS-Federation) for possible adoption.

9.8.7 Open Source Implementations

• FederID [FederID]
  ◦ Integrates Authentic [Authentic], LASSO [LASSO], LemonLDAP::NG [LemonLDAP], and InterLDAP [InterLDAP]
  ◦ Languages: Java, Perl, Python
  ◦ License: AGPLv3, GPL
• OpenLiberty-J [OpenLiberty-J]
• LASSO [LASSO]
  ◦ Implements ID-WSF and ID-FF 1.2
  ◦ C library
  ◦ License: GNU GPL
• Conor Cahill's ID-WSF tools [ID-WSF tools]
  ◦ Implement ID-WSF
  ◦ C client and Java server toolkit
  ◦ License: BSD license

9.9 Pamela Project

The Pamela Project [Pamela Project] is a community driving implementations of relying parties for Information Cards. Currently, Information-Card-supporting relying party plugins for WordPress, Joomla, MediaWiki, and Drupal are developed and released under the BSD license. The project founders believe that an important reason for the slow market adoption of user-centric identity technology (i.e., Information Cards) is the lack of available relying party implementations. The project tries to help overcome this problem.

While providing open source relying parties, and thus lowering the initial costs for developers of social networking technology, is an important element in fostering market adoption, this project does not offer interfaces that could be leveraged by PrimeLife.

9.10 Yadis

Yadis [Yadis Spec] is an HTTP-based protocol for authentication service discovery. It ensures compatibility between different authentication services; currently three such services are supported: OpenID, LID, and i-Names. The goal of the initiative is to make the authentication system used transparent to the user (i.e., a user sends a Yadis-compatible identifier to the Yadis-enabled relying party (RP), which in turn is able to resolve the authentication system used by the identity provider).

9.10.1 Protocol flow

The Yadis authentication service discovery is initiated by the user supplying an identifier to the RP. This identifier must be resolvable to a URI. The process of resolving the authentication service is executed in at most three steps. The resolution process results in the RP having a Yadis document, which allows it to use an authentication mechanism specified therein.

As the initial step, the RP issues an HTTP request to the indicated URL; this can be either a GET or a HEAD request. The answer to the request can be a Yadis document or a Yadis Resource Descriptor URI. If an HTTP HEAD request was issued, the response may contain neither, in which case the RP is obliged to issue an HTTP GET request.

If the RP did not receive the Yadis document after the first request, it must request the document at the location indicated by the Yadis Resource Descriptor URI, which results in the retrieval of the Yadis document. Alternatively, the second request may consist of issuing an HTTP GET request, in the case where an HTTP HEAD request was issued in the first step and neither a Yadis document nor a Yadis Resource Descriptor URI was received. The response to this request is as described for the first step.

The third step is only necessary if the first message was an unsuccessful HTTP HEAD request. As either the Yadis document or the Yadis Resource Descriptor URI must have been retrieved in the second step, this third step consists of retrieving the Yadis document at the location indicated by the Resource Descriptor URI. The resolution process terminates as soon as a Yadis document has been received.
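The at-most-three-step resolution can be sketched as follows. This is a hedged illustration: fetch() stands in for a real HTTP client, and the dict-shaped responses are purely illustrative.

```python
def discover(url, fetch, use_head=True):
    """Resolve a Yadis URL to a Yadis document in at most three requests."""
    # Step 1: issue a HEAD (or GET) request on the supplied URL.
    resp = fetch("HEAD" if use_head else "GET", url)
    if "document" in resp:
        return resp["document"]
    if "descriptor_uri" not in resp:
        if use_head:                  # HEAD revealed nothing: retry with GET
            return discover(url, fetch, use_head=False)
        raise ValueError("not a Yadis resource")
    # Step 2 (or 3): fetch the document at the Resource Descriptor URI.
    return fetch("GET", resp["descriptor_uri"])["document"]

# Toy server: a request on the identity URL only reveals the descriptor URI;
# a GET on the descriptor URI returns the Yadis document itself.
def fetch(method, url):
    if url == "https://alice.example.org/":
        return {"descriptor_uri": "https://alice.example.org/yadis"}
    return {"document": "<xrds:XRDS>...</xrds:XRDS>"}

print(discover("https://alice.example.org/", fetch))  # <xrds:XRDS>...</xrds:XRDS>
```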

9.10.2 The Yadis document

An example of a Yadis document specifying two available authentication services looks as follows:

<?xml version="1.0" encoding="UTF-8"?>
<xrds:XRDS xmlns:xrds="xri://$xrds" xmlns="xri://$xrd*($v*2.0)">
  <XRD>
    <Service priority="10">
      <Type>http://lid.netmesh.org/sso/2.0</Type>
    </Service>
    <Service priority="20">
      <Type>http://lid.netmesh.org/sso/1.0</Type>
    </Service>
  </XRD>
</xrds:XRDS>

According to the specification, there can be several Extensible Resource Descriptor (XRD) elements, of which only the last is taken into account. Additionally, there can be several other elements, which may be disregarded by the RP. The XRD element may contain one or several Service entries. The absence of a Service element indicates that the Yadis URI is not meant to be used with a Yadis service. Otherwise, the Service elements list the available authentication services; the ordering of the elements is not relevant. A priority can be indicated using the optional priority attribute, where smaller numbers indicate higher priorities and services without a priority are given the lowest preference level. Each Service element must contain one or more Type elements, which are URIs or XRIs referring to the service specification document.
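The priority rules above can be exercised with a short parsing sketch. The XRDS namespaces follow the Yadis 1.0 specification; the embedded document and the helper function are illustrative.

```python
import xml.etree.ElementTree as ET

XRDS = """<xrds:XRDS xmlns:xrds="xri://$xrds" xmlns="xri://$xrd*($v*2.0)">
  <XRD>
    <Service priority="20"><Type>http://lid.netmesh.org/sso/1.0</Type></Service>
    <Service priority="10"><Type>http://lid.netmesh.org/sso/2.0</Type></Service>
    <Service><Type>http://example.org/other</Type></Service>
  </XRD>
</xrds:XRDS>"""

NS = {"xrd": "xri://$xrd*($v*2.0)"}

def services_by_priority(xrds_text):
    root = ET.fromstring(xrds_text)
    xrd = root.findall("xrd:XRD", NS)[-1]        # only the last XRD counts
    services = xrd.findall("xrd:Service", NS)
    # A missing priority means lowest preference; smaller numbers win.
    services.sort(key=lambda s: int(s.get("priority", 10**9)))
    return [t.text for s in services for t in s.findall("xrd:Type", NS)]

print(services_by_priority(XRDS)[0])  # http://lid.netmesh.org/sso/2.0
```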

9.10.3 Trust and privacy properties

As the protocol only serves for service discovery, there are no privacy options in Yadis. The Yadis document, however, allows for elements other than the mandatory XRD element. In the current specification the interpretation of such elements by the RP is optional.

9.10.4 Specification development

The specification of the Yadis protocol was released as version 1.0 in March 2006. Yadis was developed by members of the OpenID and LID communities. In October 2005, the i-Names initiative joined the project.

9.10.5 Open Source Implementations

The project maintains a list of Yadis Implementations [Yadis Implementations].

9.10.6 Opportunities for PrimeLife

Yadis is a small protocol enabling the transparent use of different authentication systems. The current limitation of Yadis is that the identifier used by the authentication mechanism must be resolvable to a URI. If that limitation does not restrict the authentication mechanism proposed by the PrimeLife project, a collaboration with the Yadis project could speed up the market adoption significantly. The implementation overhead seems to be minimal.

Unfortunately, the protocol seems to have lost traction: the last update of its wiki dates back to July 30, 2009 (checked on January 12, 2010). PrimeLife should not invest in leveraging Yadis, as it merely tries to unify other protocols. Those underlying protocols themselves should be influenced and extended with functionality such as privacy awareness and user control.

Chapter 10 Applications

There is a large number of open source projects that deal with privacy in one way or another. Some of these projects provide implementations of standards; we describe those in the respective sections about the standards. Refer to SAML (see Chapter 9), Liberty Federation (see Chapter 9), XACML (see Chapter 5), or CARML (see Chapter 6) for details. In this section we summarise applications where we see potential contributions from PrimeLife.

More specifically, PrimeLife selected the forum platform phpBB, the open source social network Elgg, the wiki platform MediaWiki, and the Firefox browser for some of its open source contributions. The reasoning behind this selection is outside the scope of this document.

10.1 phpBB

The phpBB [phpBB] software is a well-known and widely used free forum platform. According to its Web site, phpBB has "millions of installations worldwide" and thus allows a large number of people to interact and share personal and non-personal data with each other. It is developed and supported by an open source community and available under a GPL copyleft license. Besides the main software package, there are so-called mods available; a mod is a modification or extension of the key features of the phpBB software.

The content structure of a phpBB forum is as follows: the content base (also called the bulletin board) is divided into forums, which may themselves be subdivided into sub-forums. Forums and sub-forums contain topics; a topic is a starting post together with the chain of replies to it and to subsequent posts. phpBB provides three main management roles. Technical obligations - general settings, forum-related and post-related tasks, user and group management, access control rules, design settings, and maintenance - are the duties of the administrator. Forums and sub-forums are assigned moderators, who are responsible for keeping the content compliant with ethical standards and the forum rules; among other things, they can change the subjects or contents of posts, lock them, or even delete them. The third main role is that of the user, who can read other users' posts and submit her own.
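The hierarchy and roles described above can be captured in a minimal data model. This is an illustrative sketch; the class and function names are made up for this example and do not reflect phpBB's actual schema:

```python
from dataclasses import dataclass, field

# Illustrative model of the phpBB content hierarchy: a board holds forums,
# forums hold sub-forums and topics, and a topic is a chain of posts.
@dataclass
class Post:
    author: str
    text: str

@dataclass
class Topic:
    posts: list = field(default_factory=list)   # posts[0] is the starting post

@dataclass
class Forum:
    name: str
    moderators: list = field(default_factory=list)
    subforums: list = field(default_factory=list)
    topics: list = field(default_factory=list)

@dataclass
class Board:
    administrators: list = field(default_factory=list)
    forums: list = field(default_factory=list)

def may_moderate(user, board, forum):
    # Administrators may act everywhere; moderators only in their forums.
    return user in board.administrators or user in forum.moderators

board = Board(administrators=["admin"])
general = Forum(name="General", moderators=["mod"])
board.forums.append(general)
general.topics.append(Topic(posts=[Post("alice", "Hello")]))
```

The role check mirrors the division of duties: ordinary users such as "alice" can post, but only "admin" and "mod" pass the `may_moderate` test for the General forum.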

phpBB provides a platform for people to interact with each other and post personal and non-personal data on the Internet. In general, there are two main privacy issues that PrimeLife could address. Firstly, we want to raise forum users' awareness of the privacy problems that may arise from publishing personal information in a (public) forum. Secondly, even users who are aware of potential privacy risks and want to limit the audience of their contributions currently have no way to control access to their forum posts. Therefore, we work on a new access control concept that shifts the responsibility for defining access control policies from the forum provider or administrator to the users who actually contribute to the forum and whose privacy may be at risk. To reach these goals, we work on prototypes that should be available to the open source community, e.g., a privacy-awareness mod that each provider can download and easily add to a phpBB installation, and a new access control concept that could be demonstrated to and discussed with the phpBB developer community.

10.2 MediaWiki

The MediaWiki software was originally developed for Wikipedia. It is mainly written in PHP and provided as open source software. PrimeLife decided to use MediaWiki for the implementation of a privacy-enhanced access control prototype and for the demonstration of a privacy-friendly incentives system.

The first implementation enhances MediaWiki with PRIME-based access control built on policies and credentials. MediaWiki users can thereby restrict access to a wiki page they own to users who prove possession of indicated properties certified by (an) indicated party/ies. Users are thus not obliged to rely on MediaWiki user accounts when restricting access. This approach opens up the potential to foster social contacts without any identity data having to be managed by the application platform; it decouples identity management functionality from the actual application functions.
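The credential-based access decision described above can be sketched as follows. This is a hypothetical illustration of the idea, not PrimeLife's actual policy engine; all names (`Credential`, `PagePolicy`, `may_read`) are invented for this example:

```python
from dataclasses import dataclass

# Hypothetical sketch of credential-based page access: the page owner
# requires a property certified by an indicated party; no wiki account
# is involved in the decision.
@dataclass(frozen=True)
class Credential:
    property: str   # e.g. "over-18"
    issuer: str     # the certifying party

@dataclass
class PagePolicy:
    required_property: str
    trusted_issuers: set

def may_read(policy, credentials):
    # Access is granted iff some presented credential certifies the
    # required property and was issued by a trusted party.
    return any(c.property == policy.required_property
               and c.issuer in policy.trusted_issuers
               for c in credentials)

policy = PagePolicy("over-18", {"gov-id-office"})
print(may_read(policy, [Credential("over-18", "gov-id-office")]))  # True
print(may_read(policy, [Credential("over-18", "self-signed")]))    # False
```

Note how the decision depends only on certified properties, never on a user account held by the wiki itself.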

The second extension implements functionality to rate page revisions, to display the average rating of a page revision, to display the rating history, and to update the reputation of a revision's author using the ratings weighted by the raters' reputation. It uses policies and credentials and requires the privacy-enhanced access control extension for MediaWiki described above.

10.3 Elgg

Elgg [Elgg] is an open source social networking platform, available under the GPL2 license. Using Elgg, organisations (e.g., schools or sports clubs) can set up their own social network environment. By default, Elgg supports features such as profiles, groups, blogs, and Twitter-like functionality. New functionality can be added by installing plug-ins; quite a lot of plug-ins are already available from the Elgg Web site.

By default, Elgg only provides very coarse-grained access control (public, authenticated users, friends, and private). We want to provide users with a more fine-grained approach, where access can be determined on a per-user basis. Secondly, we want to give users the ability to perform audience segregation. This could be done by giving a user multiple faces, where each face appears to the outside world as a separate account. Each face can have its own friends and content, so that the user can separate groups of contacts. This reflects the different images we maintain in real life depending on the context (e.g., being among friends vs. presenting to a customer).
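The faces idea can be sketched as follows. This is a minimal illustration of the audience segregation concept, not Elgg code; all names are invented for this example:

```python
# Hypothetical sketch of audience segregation via "faces": each face has
# its own friend list and content, and a viewer only ever sees the faces
# that list them as a friend.
class User:
    def __init__(self, name):
        self.name = name
        self.faces = {}  # face name -> {"friends": set, "posts": list}

    def add_face(self, face):
        self.faces[face] = {"friends": set(), "posts": []}

    def visible_posts(self, viewer):
        # A viewer sees exactly the content of the faces that befriend them;
        # other faces are invisible, as if they were separate accounts.
        return [p for f in self.faces.values()
                if viewer in f["friends"] for p in f["posts"]]

alice = User("alice")
alice.add_face("work")
alice.add_face("friends")
alice.faces["work"]["friends"].add("boss")
alice.faces["friends"]["friends"].add("bob")
alice.faces["work"]["posts"].append("quarterly report done")
alice.faces["friends"]["posts"].append("party on friday!")

print(alice.visible_posts("boss"))  # ['quarterly report done']
```

The boss and bob each see a disjoint slice of alice's content, mirroring the context-dependent images described above.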

Since both modifications require changes to the core Elgg code, it will not be possible to provide these functionalities as plug-ins that can be installed by simply copying a directory into the plug-in directory. It would therefore be preferable if these modifications were added to the official Elgg code.

10.4 Firefox Plug-ins

Firefox plug-ins provide a variety of additional functionality for the browser. Judging from reviews and download numbers, extensions with a clearly specified and limited purpose tend to be better accepted by the community. Such extensions also allow for an intuitive user interface, which is yet another plus for user acceptance.

We focus on extensions in the Privacy & Security category [Firefox Addons]. The plug-ins most interesting to us can be grouped into three subcategories: identity management, privacy, and trust. The identity management category contains plug-ins that ease the handling of different digital identities; the privacy category summarises efforts towards better protection of personal data; and the trust category covers work on visualising trust relationships between users and Web sites.

The plug-ins developed within the PrimeLife project, namely Scramble! (see Chapter 2) and the Privacy Dashboard (see Chapter 2, planned for project year 3), provide encryption for user-generated content and support informed decisions about information release, respectively. We compare Scramble! with plug-ins that offer similar functionality. We do not give such a comparison for the Privacy Dashboard, as it will provide functionality that does not compare well to the plug-ins detailed in the following.

10.4.1 Identity Management and Formfiller Enhancement

One aspect of identity management identified by PRIME was form filling. Once different personas have been defined, filling in forms can be vastly simplified. An example is the form-filling extension Autofill Forms [Autofill Forms], published under the Mozilla Public License. This plug-in fills forms automatically based on pattern matching between its internal field names and the field names in the form. The plug-in's field values can be configured for several personas, and upon usage the user decides which persona is to be used.
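The pattern-matching approach can be sketched as follows. This is an illustrative example, not Autofill Forms' actual matching logic; the persona fields and patterns are invented for this sketch:

```python
import re

# Hypothetical sketch of pattern-based form filling: a persona maps
# internal field names to regexes that are matched against the names
# of the fields found in the online form.
persona = {
    "first_name": (re.compile(r"first|fname|given", re.I), "Alice"),
    "email":      (re.compile(r"e-?mail", re.I), "alice@example.org"),
}

def fill(form_fields):
    filled = {}
    for field in form_fields:
        for pattern, value in persona.values():
            if pattern.search(field):
                filled[field] = value
                break  # first matching persona field wins
    return filled

print(fill(["fname", "Email-Address", "captcha"]))
# {'fname': 'Alice', 'Email-Address': 'alice@example.org'}
```

Fields with no matching pattern (such as a captcha) are simply left blank for the user to complete.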

A more elaborate identity management plug-in is Sxipper [Sxipper]. Similar to Autofill Forms, its form-filling functionality is based on predefined personas. Sxipper, however, does not itself try to detect which fields of the online form match the fields of the persona. To enable automatic form filling on a particular site, one user (a so-called 'trainer') first enters his personal information into the form manually. Sxipper then matches the form's fields against the most appropriate persona of this user and generates an assumed mapping, which can subsequently be used by other Sxipper users to fill in the form automatically. In addition to persona administration, Sxipper offers password management functionality, using the Firefox password manager as backend for secure password storage. Sxipper can also handle OpenIDs: on an OpenID-enabled site the user is presented with her OpenIDs, allowing login with a single click. The OpenID functionality is accompanied by phishing protection that detects unusual redirects and warns the user. The plug-in is free but not open source, and will be complemented by a commercial version with features such as disposable e-mail addresses.

Verisign's SeatBelt [SeatBelt] also provides OpenID support: it detects whether a visited Web site is OpenID-enabled and, if so, logs the user in automatically. A menu button allows changing the OpenID used for the login. SeatBelt provides phishing protection similar to Sxipper's.

There are more plug-ins for simple form filling. InFormEnter [InFormEnter] adds clickable icons next to form fields, which let the user choose from the different contents she has previously stored in the plug-in's database. For example, the user can store her personal information in the plug-in; at any later point in time the same information is available by clicking the icon next to the field. This plug-in is an improvement over the auto-complete function provided by Firefox.

AutoFormer [AutoFormer] is a simple tool for saving form information entered into a page and making it reusable. The user chooses to save the data entered into an online form; this data is then saved in a cookie and is thus available at a later point in time. A severe drawback of this approach is that restrictive handling of cookies also restricts the plug-in.

10.4.2 Privacy Enhancement

The privacy of Internet users is threatened in several ways. Firstly, information gathered on the server side (e.g., search queries) can be used to identify people; this may happen even after so-called anonymisation techniques have been applied to the data [AOL4417749]. In addition, a server can use information revealed during connection setup (e.g., IP address, browser version) to link different requests to the same user. Secondly, servers can store information on a user's host and thereby collect information about her browsing behaviour. Thirdly, if a user does not trust her host (e.g., because she uses a publicly accessible terminal), all locally stored browsing information becomes a privacy threat. We discuss several Firefox extensions that mitigate these threats.

• As already mentioned, one way of identifying users on the server side is based on connection information. Consequently, anonymisation on the network layer, as implemented by TOR [TOR] or AN.ON (JAP/JONDOS) [AN.ON], is a very useful privacy protection. There are plug-ins that improve the usability of the functionality provided by TOR. The TOR project recommends the Torbutton plug-in [Torbutton], which provides a button for enabling and disabling the use of TOR for browsing. Unfortunately, Torbutton gives no feedback when initiating TOR usage fails. Nonetheless, the plug-in supports TOR's anonymisation by, e.g., stopping other plug-ins that might add identifying information, clearing cookies and the cache, spoofing the timezone and other local values, and preventing identification through JavaScript or CSS techniques.
• Protection on the network layer is also provided by using a proxy, which yields anonymity within the set of the proxy's users: all connections going through the proxy appear to come from one user. Phzilla [Phzilla] uses this approach. If a user wants to contact a server via the proxies defined in Phzilla, she simply prepends the address with PH::, which indicates use of the proxy. Unfortunately, the proxy itself can de-anonymise the users, so the user has to trust the proxy.
• FoxyProxy [FoxyProxy] was originally developed as a proxy-switching plug-in. In addition, it offers a TOR wizard.

• TrackMeNot [TrackMeNot] tries to hide a user's queries by sending randomised queries to the major search engines, namely AOL, Google, MSN and Yahoo!. While a first version of TrackMeNot generated queries from a static list, the most recent version issues queries derived from actual queries issued by the user, trying to extrapolate possible future queries from those searches. In our opinion, a better behaviour would be to issue search queries for terms orthogonal to the user's interests, in order to make all users look as similar as possible to the search engine.
• CookieSafe [CS] provides powerful yet simple cookie management. The extension offers detailed control over cookies, extending the capabilities Firefox offers in this domain. For example, a general cookie-handling policy can be defined in addition to rules for each individual Web site. It is even possible to change existing cookies or to define new ones. A reduced set of functions has been re-implemented and published as CS Lite [CS Lite]; the CS Lite code currently forms the basis for the next version of CookieSafe. CookieSafe and CS Lite are published under the GNU General Public License.
• Yet another threat to a user's privacy is the HTTP referrer header, which can signal from where a user reached a Web site. RefControl [RefControl] allows controlling which sites may send HTTP referrers and which sites may not.
• Since private browsing has been introduced into Firefox [PrivateBrowsing], mistrust of the used host can be dealt with efficiently. Private browsing deletes, for example, the history of visited pages, form and search bar entries, passwords, cookies, and cached files.
• Similar to private browsing, Stealther [Stealther] reduces the traces left on the local host. In contrast to private browsing, it temporarily suspends the functions that allow retrieval of browsing behaviour; examples of suspended functionalities are the browsing history, cookies, form-filling information, the sending of referrer headers, and caching. In contrast to other tools, this plug-in does not affect any information stored before its activation. After deactivation via a menu button, data retention returns to the default.
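The decoy-query idea behind TrackMeNot can be illustrated with a short sketch. This is a simplified toy version of the concept under our own assumptions (a fixed seed list, a fixed batch size), not TrackMeNot's actual query generator:

```python
import random

# Illustrative sketch of decoy traffic in the style of TrackMeNot: the
# real query is hidden among randomised decoys drawn from a seed list,
# so the search engine cannot tell which query reflects a real interest.
SEED_TERMS = ["weather", "football", "recipes", "news", "travel", "music"]

def with_decoys(real_query, n_decoys=3, rng=random):
    batch = [real_query] + rng.sample(SEED_TERMS, n_decoys)
    rng.shuffle(batch)          # hide the position of the real query
    return batch

batch = with_decoys("rare disease symptoms")
print(len(batch))   # 4 queries, only one of them genuine
```

As argued above, decoys drawn from terms orthogonal to the user's interests would blend users together more effectively than decoys derived from their own past searches.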

10.4.3 Trust Enhancement

A plug-in called Web of Trust [WOT] takes a step towards establishing a trust relation between a new user of a page and the page itself. The assumption is that all users who know and use a Web site rate it; a user stumbling upon the site can see the trust ratings the page has acquired and, if the rating suggests so, adapt her behaviour accordingly. A main idea behind this plug-in is that a large enough crowd is unlikely to lie. Consequently, a user might mistrust a rating in which only very few people have participated, whereas a clear vote in favour of a page's trustworthiness is very likely to be a true statement. The user can thus condition her trust on the confidence rating given by the plug-in. In addition to user ratings, the plug-in uses lists of trustworthy sites. Currently, four dimensions can be rated: trustworthiness, vendor reliability, privacy, and suitability for children. The plug-in gets especially interesting when searching the Web using Google, MSN, or Yahoo!, where small WOT icons are shown next to the search results.
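The crowd-rating intuition above can be sketched numerically. This is a hypothetical toy model (the threshold of 10 raters is our own assumption), not WOT's actual scoring algorithm:

```python
# Illustrative sketch of crowd-based site rating: the displayed rating is
# the mean of user votes, and a confidence value grows with the number of
# raters, so a clear vote by a large crowd is trusted more than the same
# average produced by a handful of users.
def site_rating(votes, confidence_threshold=10):
    if not votes:
        return None, 0.0
    rating = sum(votes) / len(votes)
    confidence = min(1.0, len(votes) / confidence_threshold)
    return rating, confidence

rating, conf = site_rating([5, 4, 5, 5, 4])
print(round(rating, 2), conf)  # 4.6 0.5 - good rating, but only 5 raters
```

A user following the reasoning above would treat the 4.6 rating cautiously here, since the confidence of 0.5 signals that few people have participated.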

10.4.4 Other Firefox Plug-ins

• BugMeNot [BugMeNot] is a site offering username/password combinations for certain sites; the BugMeNot plug-in allows easy access to this service. A user presented with a login screen can use the plug-in to acquire a valid username/password combination. BugMeNot also offers temporary e-mail accounts, which can be given when creating an account in order not to reveal the real e-mail address. Such a temporary e-mail inbox is also made available by another Firefox plug-in called MailCatch [MailCatch]. Both services relieve the user of the hassle of providing a personal e-mail address that is only used to verify that the registration is carried out by a human.
• CryptFire [Roboform] allows encryption of selected text using the symmetric encryption algorithm AES. The encryption uses a secret key (password) that can be shared. For decryption, a user must select the encrypted text, click the option in the context menu, and enter the valid password. Thus, the user must know the password and be able to identify the encrypted text in order to decrypt.
• EncryptThis! [EncryptThis!] provides asymmetric and symmetric encryption mechanisms, using RSA as the public key algorithm and AES as the symmetric one, and allows key generation. Both algorithms are implemented in JavaScript, which makes them computationally expensive. It only allows one-to-one encryption (no group encryption) and does not provide public key ring management for the keys of friends. Moreover, it does not seem to work with Firefox version 3.
• FireGPG [FireGPG] allows using a PKI (OpenPGP/MIME compliant) with a Web mail client and can be used to authenticate people at Web sites. It is rather similar to Scramble! (a plug-in developed within the PrimeLife project), but is mainly built for use with Google Mail. In contrast to Scramble!, it provides neither audience segregation tools, nor automatic decryption, nor an option to keep the text to a limited size (i.e., what the tiny link option within Scramble! does).
• Privacy Plus [Privacy+] provides users with better privacy control. In particular, it removes Flash LSO cookies (Local Shared Objects), commonly used by Web sites to track users between sessions. Privacy+ adds a simple checkbox (Clear Flash Local Shared Objects) to the [Clean Recent History] dialog in order to clean the Flash LSO files.
• QuickJava [QuickJava] enables and disables Java and JavaScript via a button on the status bar.
• NoScript [NoScript] allows precise control over the domains that are allowed to execute Java or JavaScript, using a white-listing approach.
• Facecloak [Facecloak] implements a mechanism of random messages and encryption in order to protect users' privacy on social network sites. It presents a human-readable random message that serves both as an index to the real data on a trusted server and as a fake message, avoiding possible conflicts with a service's Terms of Use where storing encrypted data may be prohibited. It has a similar goal to Scramble!, but faces issues regarding key leakage, message generation, and the storage server. Key management is done via secret key sharing.
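The Facecloak index-message mechanism can be sketched as follows. This is a hypothetical illustration of the idea only: the dictionary stands in for the trusted storage server, and the random token stands in for Facecloak's human-readable fake messages:

```python
import secrets

# Hypothetical sketch of the Facecloak idea: the social network only ever
# stores a harmless placeholder message, which doubles as an index into a
# trusted server holding the real content.
trusted_server = {}                          # stands in for the trusted storage server

def post(real_text):
    fake = "note-" + secrets.token_hex(8)    # placeholder shown on the SNS
    trusted_server[fake] = real_text         # index -> real data
    return fake                              # this is all the SNS sees

def read(fake_message):
    # Readers with access to the trusted server resolve the index;
    # everyone else just sees the fake message.
    return trusted_server.get(fake_message, fake_message)

stored = post("meet me at the station at 5")
print(stored.startswith("note-"))  # True - the SNS never sees the real text
print(read(stored))                # meet me at the station at 5
```

Because the placeholder is ordinary-looking text rather than ciphertext, it sidesteps Terms of Use that prohibit storing encrypted data.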

Some additional plug-ins are interesting, e.g., Roboform [Roboform]. As they are not released as open source, the possibilities for PrimeLife to have an impact are very limited. Therefore, we do not take a closer look at such extensions.

10.4.5 Opportunities for PrimeLife

The Firefox extension mechanism gives PrimeLife the opportunity to realise distinct privacy features. Given a precise definition, a feature can be implemented easily and a large community can be addressed; fast and easy deployment could get a large number of people interested in the work of the PrimeLife project. There is, however, an inherent limitation of such plug-ins to Web-based services. PrimeLife created the Firefox plug-ins Scramble! and Privacy Dashboard, thereby using this excellent mechanism to reach a broad community.

10.5 MozPETs: Mozilla Privacy Enhancement Technologies

MozPETs [MozPETs] is a project that was started by the IT Transfer Office's [ITO] Prima Project [Prima] at the Technical University of Darmstadt and was partly supported by research funding from the European Commission. TU Darmstadt closed the IT Transfer Office in 2006, so there is no longer any institutional support for MozPETs; only an inactive Sourceforge project [MozPETs Code] remains.

The goal of MozPETs was to integrate all existing privacy enhancing technologies into one browser. It used the Mozilla open source browser and added various extensions allowing for anonymous browsing (MozAJP), checking Web pages for harmful content (MozPAw), or revealing cross-site tracking (Tracknosis). Unfortunately, surfing the Web with this heavily privacy-enhanced browser failed. The developer team documented their experiences in a paper published in 2005, MozPETs - a Privacy enhanced Web Browser [MozPETsPST05]. Of most interest is their analysis:

Privacy research has been done on a lot of different technologies. Most research focused on anonymity for network access, publishing, authorisation, and payment. E-mail fraud and phishing have led to increased research on technical countermeasures. Data licenses were proposed to guarantee access rights to personal information. An identity management system combines privacy enhancement and security technologies to minimise the disclosure of personal information, and provides the user with the means to make informed decisions on whether certain data is disclosed. The current identity management tools differ a lot in features and scope.

However, the Web looks different. Privacy-invasive technologies, such as tracking users with Web bugs and third-party cookies across multiple sessions and different sites, are common practice among site operators. Data mining is a key element of many commercial sites' business plans. In general, the privacy and security features of today's Web browsers are the same as those of the Netscape Navigator 6 released in late 2000: the user can modify the cookie policy, and wallet components store logins, passwords, and other personal information. Many security tutorials advise users to disable cookies and active content, including JavaScript. Applying these settings makes the Web nearly unusable, as most sites will not work correctly; after this experience the disappointed user restores the old settings.

MozPETs also tried to use P3P to give better information to the user, within the iJournal extension. The iJournal tries to detect certain types of personal information in the user's input and looks for matching parts of the server's P3P policy. With this information the user can evaluate whether he really wants to submit the data. Instead of a user policy, the iJournal gives the user context information for this particular transaction. It also stores a copy of the relevant policy to have it available for later disputes, as was also done in the PRIME [PRIME] prototype.
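The detection step can be sketched with simple pattern matching. This is a hypothetical illustration of the iJournal idea, not its actual implementation; the two patterns are our own assumptions:

```python
import re

# Hypothetical sketch of iJournal-style detection: scan the user's form
# input for patterns that look like personal data before submission, so
# that the user can check the finding against the site's P3P policy.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\+?\d[\d /-]{7,}\d\b"),
}

def detect_personal_data(text):
    # Return the kinds of personal data that appear to be present.
    return {kind for kind, pat in PATTERNS.items() if pat.search(text)}

found = detect_personal_data("contact me: alice@example.org or +41 44 1234567")
print(sorted(found))  # ['email', 'phone']
```

A real implementation would then look up the matching purposes in the server's P3P policy and present them alongside each detected item.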

As the Mozilla project is now focused on the Firefox browser, investing in further development of MozPETs does not make much sense. Instead, PrimeLife will rather look into the powerful extension mechanism for Firefox, though some MozPETs code may be recovered. The important conclusion from this very practical exercise is that privacy-enhancing technologies have to take Web functionality into account to keep the browsing experience at a decent level. Usability considerations have to take the Web's reality into account and weigh the trade-off between functionality and privacy preservation.

10.6 Noserub

Noserub [Noserub] aims at realising a decentralised social network. It is built on three elements: the Identity-URL, the profile, and the contacts. The Identity-URL is currently a Noserub URL, which can be hosted on any server running Noserub; at the location the URL points to, there needs to be meta-information identifying it as a Noserub URL. In the future, it is envisioned that any URL could be used as an Identity-URL.

The metadata at the Identity-URL is composed of the profile and the contacts, which can both be either embedded or referenced. It is given in XFN/Microformats or FOAF; the Web accounts of the identity are specified using a dedicated field in FOAF or a specifically used field in XFN. The contacts use a similar XFN/FOAF mechanism to indicate that a URI refers to a contact. Noserub allows for a certain degree of interoperability: a contact with no Noserub Identity-URL can still be added, and their accounts (e.g., at flickr, del.icio.us, ...) can still be monitored.

The documentation of Noserub is not extensive so far, which makes it challenging to judge the project. Obviously, the project does not implement any privacy features yet, although it is claimed that they will be added at some point in the future. In this area, federating formulated privacy requirements across the different services integrated into Noserub is a challenging task; the other difficulty is clearly formulating the privacy requirements in the first place. Although the initiative of bringing different social networks together is valuable, from a privacy point of view it adds much complexity.

References

[AAPML Spec] AAPML: Attribute Authority Policy Markup Language, 28 November 2006. http://www.oracle.com/technology/tech/standards/idm/igf/pdf/IGF-AAPML-spec-08.pdf

[AN.ON] Projekt: AN.ON - Anonymität.Online, accessed 27 February 2010. http://anon.inf.tu-dresden.de/

[AOL4417749] A Face Is Exposed for AOL Searcher No. 4417749, New York Times Article, 09 August 2006, accessed 22 October 2009. http://www.nytimes.com/2006/08/09/technology/09aol.html

[APD+09] PrimeLife Policy Language. Claudio A. Ardagna, Eros Pedrini, Sabrina De Capitani di Vimercati, Pierangela Samarati, Laurent Bussard, Gregory Neven, Franz-Stefan Preiss, Stefano Paraboschi, Mario Verdicchio, Dave Raggett, Slim Trabelsi, W3C Workshop on Access Control Application Scenarios, 2009. http://www.w3.org/2009/policy-ws/papers/Trabelisi.pdf

[APPEL] A P3P Preference Exchange Language 1.0 (APPEL1.0), 15 April 2002. http://www.w3.org/TR/P3P-preferences/

[Adblock] Adblock Plus 0.7.5.4, Wladimir Palant, accessed 15 October 2009. https://addons.mozilla.org/en-US/firefox/addon/1865/

[Adoption] Liberty Adoption Information, accessed 27 January 2010. http://projectliberty.org/liberty/adoption

[Application] IETF Applications Area, accessed 26 January 2010. http://www.apps.ietf.org/

[Authentic] Authentic - Home, accessed 26 January 2010. http://authentic.labs.libre-entreprise.org/

[Authentication] David Corcoran and Eirik Herskedal: Consumer Authentication - Using OpenID and Trusted Platform Modules, accessed 27 January 2010. http://www.trustedcomputinggroup.org/files/static_page_files/F7DA20B2-1D09-3519-AD3AD4AE41B97010/TrustBearer-OpenIDwTPM-RSA09-1.pdf

[AutoFormer] AutoFormer 0.4.1.8, M. Onyshchuk, 21 June 2009, accessed 22 October 2009. https://addons.mozilla.org/en-US/firefox/addon/1958/

[Autofill Forms] Autofill Forms 0.9.5.2, Sebastian Tschan, accessed 22 October 2009. https://addons.mozilla.org/en-US/firefox/addon/4775/

[BCGS09] Patrik Bichsel, Jan Camenisch, Thomas Gross, Victor Shoup: Anonymous Credentials on a Standard Java Card, ACM Conference on Computer and Communications Security 2009. http://www.akiras.de/publications/papers/BCGS2009-Anonymous_Credentials_on_a_Standard_Java_Card.CCS_09.pdf

[Bandit] Bandit Project Home, accessed 9 November 2009. http://www.bandit-project.org/

[Bandit Development] Bandit Project's Code pages, accessed 15 May 2008. https://code.bandit-project.org/trac

[BeMaBu09] Moritz Y. Becker, Alexander Malkis, and Laurent Bussard: A Framework for Privacy Preferences and Data-Handling Policies, Technical Report, 2009. http://research.microsoft.com/apps/pubs/default.aspx?id=102614

[Becker09] Moritz Y. Becker: SecPAL: Formalization and Extensions, Technical Report, 2009. http://research.microsoft.com/apps/pubs/default.aspx?id=102613

[BugMeNot] BugMeNot 2.2, Eric Hamiter, 4 September 2009, accessed 22 October 2009. https://addons.mozilla.org/en-US/firefox/addon/6349/

[CARML Spec] Client Attribute Requirements Markup Language (CARML) Specification, 24 November 2006. http://www.oracle.com/technology/tech/standards/idm/igf/pdf/IGF-CARML-spec-03.pdf

[CC] Common Criteria - Home, accessed 26 January 2010. http://www.commoncriteriaportal.org/

[CDFJPS09] Valentina Ciriani, Sabrina De Capitani di Vimercati, Sara Foresti, Sushil Jajodia, Stefano Paraboschi, Pierangela Samarati: Keep a Few: Outsourcing Data while Maintaining Confidentiality, in Proc. of the 14th European Symposium On Research In Computer Security (ESORICS 2009), Saint Malo, France, September 21-25, 2009. http://spdp.dti.unimi.it/papers/esorics09.pdf

[CMNPS10] Jan Camenisch, Sebastian Mödersheim, Gregory Neven, Franz-Stefan Preiss, and Dieter Sommer: Credential-Based Access Control Extensions to XACML, accessed 5 February 2010. http://www.w3.org/2009/policy-ws/papers/Neven.pdf

[CS] CookieSafe 3.0.5, Ron Beckman, 27 January 2009, accessed 22 October 2009. https://addons.mozilla.org/de/firefox/addon/2497/

[CS Lite] CS Lite 1.4, Ron Beckman, 27 January 2009, accessed 22 October 2009. https://addons.mozilla.org/en-US/firefox/addon/5207/

[CSS 2] Cascading Style Sheets Level 2 Revision 1 (CSS 2.1) Specification, W3C Candidate Recommendation, 8 September 2009. http://www.w3.org/TR/CSS/

[Caja] Caja Code Site, accessed 26 January 2010. http://code.google.com/p/google-caja/

[CardSpace] Windows CardSpace Overview, accessed 5 February 2010. http://www.microsoft.com/windows/products/winfamily/cardspace/default.mspx

[Compact Policy] Section "Compact Policies" in "The Platform for Privacy Preferences 1.0 (P3P1.0) Specification", 16 April 2002. http://www.w3.org/TR/P3P/#compact_policies

[Concordia] Project Concordia Home, accessed 5 February 2010. http://projectconcordia.org/index.php/Concordia

[DFJPS07] Sabrina De Capitani di Vimercati, Sara Foresti, Sushil Jajodia, Stefano Paraboschi, Pierangela Samarati: Over-encryption: Management of Access Control Evolution on Outsourced Data, in Proc. of the 33rd International Conference on Very Large Data Bases (VLDB 2007), Vienna, Austria, September 23-28, 2007. http://spdp.dti.unimi.it/papers/vldb07.pdf

[DOM Level 3 Core] Document Object Model (DOM) Level 3 Core Specification, W3C Recommendation, 7 April 2004. http://www.w3.org/TR/DOM-Level-3-Core

[Data Protection] WAVE SYSTEMS Case Study - CBI Health: Data Protection for Regulatory Compliance, accessed 27 January 2010. http://www.trustedcomputinggroup.org/resources/data_protection_for_regulatory_compliance

[DigitalMe] DigitalMe Project, accessed 9 November 2009. http://code.bandit-project.org/trac/wiki/DigitalMe/

[ECMA TC39] TC39 - ECMAScript, accessed 26 January 2010. http://www.ecma-international.org/memento/TC39.htm

[ECMAScript] ECMAScript Language Specification 3rd edition, December 1999. http://www.ecma-international.org/publications/standards/Ecma-262.htm

[ECMAScript 5th Edition] ECMAScript Language Specification, December 2009. http://www.ecma-international.org/publications/files/ECMA-ST/ECMA-262.pdf

[EPAL] Enterprise Privacy Authorization Language (EPAL 1.2), 10 November 2003. http://www.w3.org/Submission/2003/SUBM-EPAL-20031110/

[ESOE] Enterprise Sign On Engine - Home, accessed 26 January 2010. http://www.esoeproject.org/

[ETSI/3GPP] ETSI - e-Standardisation Portal, accessed 26 January 2010. http://portal.etsi.org/portal_common/home.asp?tbkey1=SCP

[Elgg] Elgg: a powerful social engine, accessed 11 December 2009. http://elgg.org/

[EncryptThis!] EncryptThis, accessed 22 October 2009. http://www.langenhoven.com/code/encryptthis/encryptthis.php

[EnhancedHistory] Enhanced History, AnonEMoose, 2 November 2006, accessed 26 January 2010. https://addons.mozilla.org/en-US/firefox/addon/420

[Enterprise-Java-XACML] Enterprise-Java-XACML from Google Code, (beta) version 0.0.14, February 2008. http://code.google.com/p/enterprise-java-xacml/

[Eurosmart] Eurosmart - Published Figures, accessed 26 January 2010. http://www.eurosmart.com/index.php/publications/market-overview.

[Facecloak] FaceCloak, Prof. Urs Hengartner, August 2009, accessed 22 October 2009. http://crysp.uwaterloo.ca/software/facecloak/index.html

[FederID] FederID - Home, accessed 26 January 2010. http://federid.objectweb.org/xwiki/bin/view/Main/WebHome

[FireGPG] FireGPG 0.7.7, The_glu, 26 July 2009, accessed 22 October 2009. http://getfiregpg.org/s/home

[Firefox Addons] Firefox Privacy & Security Addons, accessed 22 October 2009. https://addons.mozilla.org/en-US/firefox/browse/type:1/cat:12/

[FoxyProxy] FoxyProxy, accessed 27 February 2010. http://foxyproxy.mozdev.org/

[Future of P3P Workshop 2002] W3C Workshop on the Future of P3P, 12-13 November 2002. http://www.w3.org/2002/p3p-ws/

[Future of P3P Workshop 2003] W3C Workshop on the long term Future of P3P and Enterprise Privacy Languages, 19-20 June 2003. http://www.w3.org/2003/p3p-ws/

[Geuer-Pollmann/Claessens] Christian Geuer-Pollmann, Joris Claessens: Web services and web service security standards. Information Security Technical Report, 2005, 10, 15-24. http://dx.doi.org/10.1016/j.istr.2004.11.001

[Global Platform] Global Platform - Home, accessed 26 January 2010. http://www.globalplatform.org/

[Guidescope] Guidescope - Take control of the Web, accessed 15 October 2009. http://www.guidescope.com/

[H6.1.1] PrimeLife Consortium: Identity Management Infrastructure Protocols for Privacy-enabled SOA, PrimeLife report D6.1.1, 2009. http://www.primelife.eu/images/stories/deliverables/d6.1.1-idm_infrastructure_protocols_for_privacy-enabled_soa-public.pdf

[H6.3.1] PrimeLife Consortium: Requirements for privacy-enhancing Service-oriented architectures, PrimeLife report H6.3.1, 2009. http://www.primelife.eu/images/stories/deliverables/h6.3.1-requirements_for_privacy_enhancing_soas-public.pdf

[HTML 4.01] HTML 4.01 specification, W3C Recommendation, 24 December 1999. http://www.w3.org/TR/html401/

[HTML 5] HTML 5, A vocabulary and associated APIs for HTML and XHTML, W3C Working Draft, 22 January 2008. http://www.w3.org/TR/html5/

[HTML WG] W3C HTML Working Group, accessed 26 January 2010. http://www.w3.org/html/wg/

[HTML5] Hyper Text Markup Language 5 - News and Opinions, accessed 26 January 2010. http://www.w3.org/html

[HTTP bis] Hypertext Transfer Protocol Bis charter, 26 December 2007. http://www.ietf.org/html.charters/httpbis-charter.html

[HTTP bis Security Properties] Security Requirements for HTTP (draft-ietf-httpbis-security-properties-03), 7 March 2009. http://tools.ietf.org/html/draft-ietf-httpbis-security-properties

[HTTP state WG] HTTP State Management Mechanism (httpstate), modified 11 December 2009, accessed 26 January 2010. http://www.ietf.org/dyn/wg/charter/httpstate-charter.html

[Heraldry] Heraldry Project Incubation Status, accessed 26 January 2010. http://incubator.apache.org/projects/heraldry.html

[Higgins] Higgins open source identity management project, accessed 26 January 2010. http://www.eclipse.org/higgins

[I2P] Invisible Internet Project Anonymous Network, accessed 15 October 2009. http://www.i2p2.de/

[IAB] Internet Architecture Board, accessed 26 January 2010. http://www.iab.org/

[IBM WebSphere] IBM WebSphere, accessed 26 January 2010. http://www.ibm.com/websphere

[ICF] Information Card Foundation, accessed 23 October 2009. http://informationcard.net/

[ICF Overview] Information Card Foundation - Quick Overview, accessed 27 February 2010. http://informationcard.net/quick-overview

[ID-FF Profiles] Liberty ID-FF Bindings and Profiles Specification, 2004. https://www.projectliberty.org/liberty/content/download/319/2369/file/draft-liberty-idff-bindings-profiles-1.2-errata-v2.0.pdf

[ID-SIS10] Liberty Alliance ID-SIS 1.0 Specifications, accessed 27 January 2010. http://www.projectliberty.org/resource_center/specifications/liberty_alliance_id_sis_1_0_specifications

[ID-WSF tools] Conor Cahill - Liberty Open Source Toolkit, accessed 26 January 2010. http://www.cahillfamily.com/OpenSource/

[ID-WSF11] Liberty Alliance ID-WSF 1.1 Specifications, accessed 27 January 2010. http://www.projectliberty.org/resource_center/specifications/liberty_alliance_id_wsf_1_1_specifications

[IESG] The Internet Engineering Steering Group, accessed 26 January 2010. http://www.ietf.org/iesg.html

[IETF] The Internet Engineering Task Force, accessed 26 January 2010. http://www.ietf.org/

[IETF Oauth WG] Open Authentication Protocol (oauth), accessed 27 January 2010. http://www.ietf.org/dyn/wg/charter/oauth-charter.html

[IETF SEC] IETF - Security Area, accessed 26 January 2010. http://www.tools.ietf.org/area/sec/

[IGF] Identity Governance Framework, accessed 26 January 2010. http://www.oracle.com/technology/tech/standards/idm/igf/index.html

[ISO/IEC WG4] ISO/IEC JTC 1/SC 17/WG 4 - Integrated circuit(s) cards with contact, 18 April 2006. http://isotc.iso.org/livelink/livelink/fetch/2000/2122/327993/327971/1054366/17n3025.pdf?nodeid=7392684&vernum=0

[ITO] IT Transfer Office (ITO), closed in 2006. http://www.ito.tu-darmstadt.de/

[Idemix] Idemix (identity mixer): Pseudonymity for e-transactions, accessed 26 January 2010. http://www.zurich.ibm.com/security/idemix/

[Identity Mixer Lib.] Identity Mixer Library at TU Dresden, accessed 26 January 2010. https://prime.inf.tu-dresden.de/idemix/

[InFormEnter] InFormEnter 0.5.5.5, M. Onyshchuk, 7 January 2009, accessed 22 October 2009. https://addons.mozilla.org/en-US/firefox/addon/673/

[Infogrid] NetMesh InfoGrid LID, accessed 26 January 2010. http://netmesh.org/downloads/

[InterLDAP] InterLDAP - Home, accessed 26 January 2010. http://wiki.interldap.objectweb.org/xwiki/bin/view/Main/WebHome

[JRC] Joint Research Centre, accessed 26 January 2010. http://ec.europa.eu/dgs/jrc/index.cfm

[Kerberos TP 1.1] Web Services Security Kerberos Token Profile 1.1, OASIS Standard Specification, 1 February 2006. http://docs.oasis-open.org/wss/v1.1/wss-v1.1-spec-os-KerberosTokenProfile.pdf

[LASSO] Liberty Alliance Single Sign-On (LASSO) - Home, accessed 26 January 2010. http://lasso.entrouvert.org/

[LemonLDAP] LemonLDAP - Home, accessed 26 January 2010. http://wiki.lemonldap.objectweb.org/xwiki/bin/view/NG/Presentation

[Liberty Alliance] The Liberty Alliance, accessed 27 January 2010. http://www.projectliberty.org/

[Lightbulb] Lightbulb - Federated Identity Integration for LAMP and MARS, accessed 26 January 2010. https://lightbulb.dev.java.net/

[MS .NET] Microsoft .NET Framework, accessed 26 January 2010. http://www.microsoft.com/net/

[MailCatch] MailCatch: Temporary Emails 1.0.3, NetCore Team, 19 October 2009, accessed 22 October 2009. https://addons.mozilla.org/en-US/firefox/addon/10394/

[Microformats] Microformats, accessed 26 January 2010. http://www.microformats.org

[MozPETs] MozPETs: Mozilla Privacy Enhancement Technologies, accessed 15 October 2009. http://mozpets.sourceforge.net/

[MozPETs Code] MozPETS Sourceforge Project Page, accessed 26 January 2010. http://sourceforge.net/projects/mozpets/

[MozPETsPST05] MozPETs - a Privacy enhanced Web Browser, in Proceedings of the Third Annual Conference on Privacy, Security and Trust (PST05). http://www.ito.tu-darmstadt.de/publs/pdf/BruecknerVoss_Mozpets.pdf

[NFC] The Near Field Communication (NFC) Forum, accessed 26 January 2010. http://www.nfc-forum.org/home

[Netmesh license] Netmesh Sleepycat License, accessed 26 January 2010. http://netmesh.org/downloads/netmesh-infogrid-lid/license.txt

[NoScript] NoScript 1.9.9.47, Giorgio Maone, accessed 27 February 2010. https://addons.mozilla.org/de/firefox/addon/722/

[Noserub] Noserub - Decentral Social Network, accessed 26 January 2010. http://noserub.com/

[OASIS] Organisation for the Advancement of Structured Information Standards (OASIS), accessed 26 January 2010. http://www.oasis-open.org/

[OASIS IPR charter] OASIS Intellectual Property Rights (IPR) Policy, 4 August 2009. http://www.oasis-open.org/who/intellectualproperty.php

[OASIS XACML] OASIS XACML Technical Committee page, accessed 26 January 2010. http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=xacml

[OAuth Code] OAuth - Source Code, accessed 26 January 2010. http://oauth.net/code/

[OAuth Core 1.0] OAuth Core 1.0, 4 December 2007. http://oauth.net/core/1.0/

[OAuth WRAP] OAuth WRAP, accessed 27 January 2010. http://wiki.oauth.net/OAuth-WRAP

[OAuth.NET] OAuth - Home, accessed 26 January 2010. http://oauth.net/

[OSIS Feature Test] OSIS - Create New Feature Tests, accessed 9 November 2009. http://osis.idcommons.net/wiki/How_to_Create_New_FeatureTests

[OSIS Participants] OSIS Participants, accessed 9 November 2009. http://osis.idcommons.net/wiki/Category:Participant

[OSIS at Identity Commons] OSIS: Open Source Identity Systems, accessed 9 November 2009. http://osis.idcommons.net/

[OWL] OWL Web Ontology Language Overview, W3C Recommendation, 27 October 2009. http://www.w3.org/TR/owl-overview/

[OpenID 1.0] OpenID Attribute Exchange 1.0 - Final, 5 December 2007. http://openid.net/specs/openid-attribute-exchange-1_0.html

[OpenID 2.0] OpenID Authentication 2.0 - Final, 5 December 2007. http://openid.net/specs/openid-authentication-2_0.html

[OpenID Foundation] OpenID Foundation, accessed 26 January 2010. http://openid.net/foundation/

[OpenID libraries] OpenID Libraries, accessed 26 January 2010. http://wiki.openid.net/Libraries

[OpenID4Java] OpenID4Java Source Code, accessed 26 January 2010. http://code.sxip.com/openid4java/

[OpenLiberty-J] OpenLiberty-J - Home, accessed 26 January 2010. http://www.openliberty.org/wiki/index.php/Main_Page

[OpenSAML] OpenSAML - Home, accessed 26 January 2010. https://spaces.internet2.edu/display/OpenSAML/Home

[P3P] Platform for Privacy Preferences (P3P) Project, 20 November 2007. http://www.w3.org/P3P/

[P3P 1.0 Spec] The Platform for Privacy Preferences 1.0 (P3P1.0) Specification, 16 April 2002. http://www.w3.org/TR/P3P

[P3P 1.1 Spec] The Platform for Privacy Preferences 1.1 (P3P1.1) Specification, 13 November 2006. http://www.w3.org/TR/P3P11/

[PLING] PLING - W3C Policy Languages Interest Group, accessed 26 January 2010. http://www.w3.org/Policy/pling/

[PRIME] PRIME - Privacy and Identity Management for Europe, EU FP6 IST-507591, 2004-2008, accessed 13 October 2009. https://www.prime-project.eu/

[PRIME policies] C.A. Ardagna, M. Cremonini, S. De Capitani di Vimercati, and P. Samarati. A privacy-aware access control system. Journal of Computer Security (JCS), 16(4):369–392, 2008. http://seclab.dti.unimi.it/Papers/ACDS-JCS2008.pdf

[Pamela Project] The Pamela Project, accessed 11 December 2009. http://pamelaproject.com/

[Phzilla] Phzilla, InBasic, 29 July 2009, accessed 22 October 2009. https://addons.mozilla.org/en-US/firefox/addon/3239/

[Position Papers] W3C Workshop on Access Control Application Scenarios - Position Papers, 17 and 18 November 2009. http://www.w3.org/2009/policy-ws/papers

[Prima] The PRIMA project, accessed 15 October 2009. http://www.ito.tu-darmstadt.de/projects/prima/

[Privacy Bird] Privacy Bird - Find web sites that respect your privacy, accessed 26 January 2010. http://www.privacybird.org/

[Privacy+] Privacy + 1.0.1, DownloadMan, 11 September 2009, accessed 22 October 2009. https://addons.mozilla.org/en-US/firefox/addon/14217/

[PrivateBrowsing] Firefox Private Browsing, accessed 9 November 2009. http://support.mozilla.com/en-US/kb/Private+Browsing

[Privoxy] Privoxy - Home Page, accessed 15 October 2009. http://www.privoxy.org/

[QuickJava] QuickJava 0.4.2.1, Doug G, 3 August 2006, accessed 22 October 2009. https://addons.mozilla.org/en-US/firefox/addon/1237/

[RDF-PRIMER] RDF Primer, W3C Recommendation, 10 February 2004. http://www.w3.org/TR/rdf-primer/

[REL TP 1.1] Web Services Security Rights Expression Language (REL) Token Profile 1.1, OASIS Standard, 1 February 2006. http://www.oasis-open.org/committees/download.php/16687/oasis-wss-rel-token-profile-1.1.pdf

[RFC] Request for Comments, accessed 26 January 2010. http://www.ietf.org/rfc.html

[RFC 2616] Hypertext Transfer Protocol -- HTTP/1.1 specification, June 1999. http://www.ietf.org/rfc/rfc2616.txt

[RFC 2818] HTTP Over TLS, May 2000. http://www.ietf.org/rfc/rfc2818.txt

[RFC 3935] A Mission Statement for the IETF, October 2004. http://www.ietf.org/rfc/rfc3935.txt

[RIF] RIF Working Group, accessed 26 January 2010. http://www.w3.org/2005/rules/wiki/RIF_Working_Group

[RIF Use Cases] RIF Use Cases and Requirements, 20 April 2008. http://www.w3.org/2005/rules/wiki/UCR

[RefControl] RefControl 0.8.12, James Abbatiello, 14 July 2009, accessed 22 October 2009. https://addons.mozilla.org/en-US/firefox/addon/953/

[Rei] Rei: A Policy Specification Language, May 2005, accessed 26 January 2010. http://rei.umbc.edu/

[Rein] The Rein Policy Framework for the Semantic Web, 28 July 2006. http://dig.csail.mit.edu/2006/06/rein/

[Removing Data Transfer from P3P] Removing Data Transfer from P3P, 21 September 1999. http://www.w3.org/P3P/data-transfer.html

[Roboform] Roboform Toolbar for Firefox 6.9.96, Siber Systems, 21 July 2009, accessed 22 October 2009. https://addons.mozilla.org/en-US/firefox/addon/750/

[CryptFire] CryptFire - Encryption made easy 1.0, mibwick, 9 June 2009, accessed 22 October 2009.

[SAML 2.0 Bindings] Bindings for the OASIS Security Assertion Markup Language (SAML) V2.0, 15 March 2005. http://docs.oasis-open.org/security/saml/v2.0/saml-bindings-2.0-os.pdf

[SAML TP 1.1] Web Services Security SAML Token Profile 1.1, OASIS Standard, 1 February 2006. http://docs.oasis-open.org/wss/v1.1/wss-v1.1-spec-os-SAMLTokenProfile.pdf

[SAML V2.0] OASIS SAML V2.0 Specification, 15 March 2005. http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=security#samlv20

[SICSACML] SICSACML: XACML 3.0 Patch for Sun's XACML 2.0 Implementation, accessed 26 January 2010. http://www.sics.se/node/2465

[SOAP12] SOAP Version 1.2, Second Edition, W3C Recommendation, 27 April 2007. http://www.w3.org/TR/soap12

[SPARQL] SPARQL Query Language for RDF, W3C Recommendation, 15 January 2008. http://www.w3.org/TR/rdf-sparql-query/

[SUN XACML] Sun’s XACML Implementation, accessed 13 October 2009. http://sunxacml.sourceforge.net/

[SeatBelt] VeriSign's OpenID SeatBelt, VeriSign, Inc., accessed 22 October 2009. https://pip.verisignlabs.com/seatbelt.do

[Semantic Web] W3C Semantic Web Activity, accessed 26 January 2010. http://www.w3.org/2001/sw/

[Shibboleth] Shibboleth - Home, accessed 26 January 2010. http://shibboleth.internet2.edu/

[SocialWebPrivacy] Social Web Privacy, accessed 26 January 2010. http://dig.csail.mit.edu/2009/SocialWebPrivacy/

[Stealther] Stealther 1.0.7, Filip Bozic, 19 August 2009, accessed 22 October 2009. https://addons.mozilla.org/en-US/firefox/addon/1306/

[Sun XACML v1.2] Sun Implementation of XACML, version 1.2, 14 February 2006. http://sourceforge.net/projects/sunxacml/

[Sun XACML v2.0] Sun Implementation of XACML, version 2.0, 9 August 2009. http://sunxacml.svn.sourceforge.net/viewvc/sunxacml/trunk/

[Sxipper] Sxipper 2.2.3, Sxip Identity, accessed 22 October 2009. https://addons.mozilla.org/en-US/firefox/addon/4865/

[TCG] The Trusted Computing Group, accessed 27 January 2010. https://www.trustedcomputinggroup.org/home

[TCG Storage] TCG Storage Specifications and Key Management, December 2009. http://www.trustedcomputinggroup.org/files/resource_files/D70AC70C-1D09-3519-AD472D02B35B9332/Key%20Management_20091228.pdf

[TOR] Tor: anonymity online, accessed 15 October 2009. http://www.torproject.org/

[TREPALXACML] A Comparison of Two Privacy Policy Languages: EPAL and XACML, Anne Anderson, accessed 26 January 2010. http://research.sun.com/techrep/2005/smli_tr-2005-147/TRCompareEPALandXACML.html

[TechHerald] The Tech Herald - Article, Companies Team up to create Information Cards, 2 July 2008. http://www.thetechherald.com/article.php/200827/1381/Companies-Team-up-to-create-Information-Cards

[Torbutton] Torbutton 1.2.2, Scott Squires and Mike Perry, 9 August 2009, accessed 22 October 2009. https://addons.mozilla.org/en-US/firefox/addon/2275/

[TrackMeNot] TrackMeNot 0.6.291, Daniel C. Howe and Helen Nissenbaum, 28 August 2009, accessed 22 October 2009. https://addons.mozilla.org/en-US/firefox/addon/3173/

[Trusted AC] Ned Smith: Putting Trust into the Network: Securing Your Network through Trusted Access Control, 27 April 2005. http://cserg0.site.uottawa.ca/ncac05/smith_18500034.ppt

[URI] Uniform Resource Identifier (URI): Generic Syntax, January 2005. http://www.ietf.org/rfc/rfc3986.txt

[W3C] The World Wide Web Consortium, accessed 26 January 2010. http://www.w3.org/

[W3C DAP] Device APIs and Policy Working Group, accessed 26 January 2010. http://www.w3.org/2009/dap/

[WAF WG] Web Application Formats Working Group, accessed 19 May 2008. http://www.w3.org/2006/appformats

[WEBARCH] Architecture of the World Wide Web, Volume 1, W3C Recommendation 15 December 2004. http://www.w3.org/TR/webarch/

[WOT] Web of Trust - WOT 20090918, Against Intuition, 21 September 2009, accessed 22 October 2009. https://addons.mozilla.org/en-US/firefox/addon/3456/

[WS-MetadataExchange 1.1] Web Services Metadata Exchange, Version 1.1, August 2006. http://download.boulder.ibm.com/ibmdl/pub/software/dw/specs/ws-mex/metadataexchange.pdf

[WS-Policy 1.5] Web Services Policy 1.5 - Framework, W3C Recommendation, 4 September 2007. http://www.w3.org/TR/ws-policy/

[WS-Policy Primer] Web Services Policy 1.5 - Primer, 12 November 2007. http://www.w3.org/TR/2007/NOTE-ws-policy-primer-20071112/

[WS-Policy WG] Web Services Policy Working Group, accessed 26 January 2010. http://www.w3.org/2002/ws/policy/

[WS-PolicyAttachment 1.2] Web Services Policy 1.2 - Attachment, W3C Member Submission, 25 April 2006. http://www.w3.org/Submission/WS-PolicyAttachment/

[WS-PolicyAttachment 1.5] Web Services Policy 1.5 - Attachment, 4 September 2007. http://www.w3.org/TR/ws-policy-attach/

[WS-SecureConversation 1.4] WS-SecureConversation 1.4, OASIS Standard, 2 February 2009. http://docs.oasis-open.org/ws-sx/ws-secureconversation/v1.4/ws-secureconversation.pdf

[WS-Security] Web Services Security Core Specification 1.1, OASIS Standard, 1 February 2007. http://www.oasis-open.org/committees/download.php/16790/wss-v1.1-spec-os-SOAPMessageSecurity.pdf

[WS-SecurityPolicy 1.3] WS-SecurityPolicy 1.3, OASIS Standard, 2 February 2009. http://docs.oasis-open.org/ws-sx/ws-securitypolicy/v1.3/ws-securitypolicy.pdf

[WS-Trust 1.4] WS-Trust 1.4, OASIS Standard, 2 February 2009. http://docs.oasis-open.org/ws-sx/ws-trust/v1.4/ws-trust.pdf

[WS-XACML v1.0] Web Services Profile of XACML (WS-XACML), Version 1.0, Working Draft 10, 10 August 2007. http://www.oasis-open.org/committees/download.php/24951/xacml-3.0-profile-webservices-spec-v1-wd-10-en.pdf

[WSC] Web Security Context Working Group, accessed 26 January 2010. http://www.w3.org/2006/WSC/

[WSFED TC] OASIS Web Services Federation (WSFED) TC Public Documents, accessed 5 February 2010. http://www.oasis-open.org/committees/documents.php?wg_abbrev=wsfed

[Web API] Web API Working Group, accessed 19 May 2008. http://www.w3.org/2006/webapi

[WebApps WG] W3C Web Applications (WebApps) Working Group, accessed 26 January 2010. http://www.w3.org/2008/webapps/

[Wikipedia] Wikipedia - Internet Junkbuster, accessed 15 October 2009. http://en.wikipedia.org/wiki/Internet_Junkbuster

[Workshop on Access Control Application Scenarios] W3C Workshop on Access Control Application Scenarios - Workshop Report Published, 17 and 18 November 2009. http://www.w3.org/2009/policy-ws/

[X.509 Certificate TP 1.1] Web Services Security X.509 Certificate Token Profile 1.1, OASIS Standard Specification, 1 February 2006. http://docs.oasis-open.org/wss/v1.1/wss-v1.1-spec-os-x509TokenProfile.pdf

[XACML Obligation Families] Working Draft of XACML v3.0 Obligation Families version 1.0, 28 December 2007. http://www.oasis-open.org/committees/download.php/27230/xacml-3.0-obligation-v1-wd-03.zip

[XACML comments] Providing Feedback to Members of the OASIS eXtensible Access Control Markup Language (XACML) TC, accessed 5 February 2010. http://www.oasis-open.org/committees/comments/index.php?wg_abbrev=xacml

[XACML to RIF] Fatih Turkmen, Lalana Kagal, and Bruno Crispo: Interoperable Access Control Policies: A XACML and RIF Demonstration, accessed 26 January 2010. http://dig.csail.mit.edu/2009/AFOSR/Fatih/Final/RR2009.pdf

[XACML v2.0] eXtensible Access Control Markup Language (XACML), Version 2.0, 1 February 2005. http://docs.oasis-open.org/xacml/2.0/access_control-xacml-2.0-core-spec-os.pdf

[XACML v3.0] eXtensible Access Control Markup Language (XACML) Version 3.0, 16 April 2009. http://www.oasis-open.org/committees/document.php?document_id=32425

[XACMLPriv v3.0] OASIS XACML v3.0 Privacy Policy Profile Version 1.0, Committee draft 1, 16 April 2009. http://www.oasis-open.org/committees/document.php?document_id=32425

[XADES] XML Advanced Electronic Signatures (XAdES) - Details of 'RTS/ESI-000034' Work Item, accessed 26 January 2010. http://webapp.etsi.org/workprogram/Report_WorkItem.asp?WKI_ID=21353

[XHTML 1.0] XHTML 1.0 The Extensible HyperText Markup Language (Second Edition), W3C Recommendation, 1 August 2002. http://www.w3.org/TR/xhtml1/

[XHTML 2] XHTML2 Working Group Home Page, accessed 26 January 2010. http://www.w3.org/MarkUp

[XHTML11] XHTML 1.1 - Module-based XHTML. W3C Recommendation 31 May 2001. http://www.w3.org/TR/xhtml11/

[XML Enc] XML Encryption Syntax and Processing, W3C Recommendation, 10 December 2002. http://www.w3.org/TR/xmlenc-core/

[XML Sig] XML-Signature Syntax and Processing, W3C Recommendation, 12 February 2002. http://www.w3.org/TR/xmldsig-core/

[XML Sig Transform] Decryption Transform for XML Signature, 10 December 2002. http://www.w3.org/TR/xmlenc-decrypt

[XML Sig/Enc Workshop] W3C Workshop on Next Steps for XML Signature and XML Encryption, 25-26 September 2007. http://www.w3.org/2007/xmlsec/ws/report.html

[XMLHttpRequest] The XMLHttpRequest Object specification, W3C Working Draft, 15 April 2008. http://www.w3.org/TR/XMLHttpRequest/

[XMLSec] XML Security Specifications Maintenance Working Group, accessed 26 January 2010. http://www.w3.org/2007/xmlsec/

[XmlSec] XML Security Working Group, accessed 26 January 2010. http://www.w3.org/2008/xmlsec/

[Yadis Implementations] Yadis Implementations Overview, accessed 26 January 2010. http://yadis.org/wiki/Yadis_Implementations

[Yadis Spec] Yadis Specification Version 1.0, 18 March 2006. http://yadis.org/wiki/Yadis_1.0_(HTML)

[Yadis.ORG] Yadis 1.0 - The Identity and Accountability Foundation for Web 2.0, accessed 26 January 2010. http://yadis.org/

[ZXID] ZXID Home - Open Source IdM for the Masses, accessed 26 January 2010. http://www.zxid.org/

[openinfocard] Open Source Information Card Selector and Relying Party and Security Token Server Java Library, accessed 27 January 2010. http://code.google.com/p/openinfocard/

[phpBB] phpBB - Creating Communities, accessed 9 November 2009. http://www.phpbb.com/
