Data Anonymizing API – Phase II


TM Forum Digital Transformation World 2018, Nice, France, 14-16 May
© 2018 TM Forum

Catalyst Champion & Participants

• Orange – Emilie Sirvent-Hien, Anonymization Project Manager, and Sophie Nachman, Standards Manager. A standardized anonymization API allows data to be shared internally within Orange and externally with partners in order to unleash service innovation, while guaranteeing the privacy of customers in compliance with the GDPR (General Data Protection Regulation).
• Vodafone – Atul Ruparelia, Data Architect, and Imo Ekong, Big Data Communication Specialist (Analytics CoE). Contributing towards a standard Open API for anonymization/pseudonymization (using rich TM Forum assets), allowing data sharing with internal and external partners to drive service innovation while also protecting PII in compliance with the GDPR.
• Cardinality – Steve Bowker, CEO & Co-Founder, and Dejan Vujic, Head of Data Science. Cardinality has implemented one of the largest Hadoop-based analytics solutions at a European telco, which leverages a containerised, microservices-based architecture making extensive use of APIs, including data anonymization, pseudonymization and encryption.
• Brytlyt – Richard Heyns, CEO & Founder. Brytlyt leverages advanced processing on GPUs with natively parallelizable algorithms, which form the foundation of its high-performance data analytics and machine learning.
• Liverpool John Moores University – Professor Paul Morrissey. Amongst other things, Paul is the Global Ambassador for the TM Forum with responsibility for Big Data Analytics and Customer Experience Management. Provided input on business drivers, CurateFX and the Osterwalder Business Canvas.
• Data Science and Software Programmers – 5G Innovation Centre (5GIC), hosted by the University of Surrey. 5GIC members are collaborating closely to drive forward advanced wireless research, reduce the risks of implementing 5G (through their 5G testbed) and contribute to global 5G standardisation. The 5GIC has to date attracted an additional £68m from industry and regional partners.

Catalyst Background

Kicking off where we left off with the Data Anonymization Phase I Catalyst: mobile network operators are increasingly using third parties to help deliver their overall solution offerings. As service providers, they cannot share data or allow access to their systems without data protection (for example, data anonymization), because they have to protect the privacy of their subscribers and adhere to a demanding regulatory environment with new mandatory regulations requiring data privatization.

Data Anonymization – Marketing Campaign Example Use Case

(Diagram: Transaction and Web Transaction Data Sources, DataMart / Data Repository, Data Analytics Platform, Data Anonymization API, Marketing Database / Solution Provider)

Customer data may contain personally identifiable information (PII) such as name, address and date of birth. A marketing campaign may decide to leverage specific data to recommend new products or services to customers. Data needs to be retrieved and transferred from the source to an external or third-party data repository, where a marketing company analyses it for insights used to offer products to customers (cross-sell, up-sell) and to drive marketing with current or new customers for revenue growth; Vodafone, for example, would use third parties for the creation of marketing offers. For legal, privacy and security purposes, personally identifiable information may need to be anonymised before the data is transferred to a third party or external database.

What is Data Anonymization?
A process by which personally identifiable information (PII) is irreversibly altered in such a way that a PII principal can no longer be identified directly or indirectly, either by the PII controller alone or in collaboration with any other party (ISO 29100:2011).

In accordance with the Article 29 Working Party Opinion*, there are three main principles:
• Singling out: is it possible to isolate someone in particular?
• Linkability: is it possible to link at least two records concerning the same data subject?
• Inference: is it possible to deduce information about one person?

Once a dataset is truly anonymized, the GDPR no longer applies.

Anonymization technologies fall into three technical families:
• Generalization (k-anonymization)
• Randomization (noise addition, differential privacy)
• Obfuscation

*http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2014/wp216_en.pdf

What is Pseudonymization? What is Data Masking?

Pseudonymization is a data management and de-identification procedure by which personally identifiable information fields within a data record are replaced by one or more artificial identifiers, or pseudonyms. A single pseudonym for each replaced field, or collection of replaced fields, makes the data record less identifiable while remaining suitable for data analysis and data processing. Pseudonymized data can be restored to its original state with the addition of information that allows individuals to be re-identified, whereas anonymized data can never be restored to its original state.

Pseudonymization can be extended with data masking techniques:
• Substitution
• Shuffling
• Number and date variance
• Encryption
• Nulling and deletion
• Scrambling

When is it used to share data?
• Inside a company, as early as possible
• With a trusted partner (e.g. Vodafone – BT Openreach)
• Note that pseudonymised data are still personal data, only less sensitive

Privacy by Design Methodology

Qualify the use case:
• What type of data? Collected on which basis? Who has access to it?
• What do you want to do with the protected data?
• Do you need a reversible step?
• What are the re-identification risks?

Find the trade-off between utility and privacy:
• Choose the right technology
• Validate it with a privacy risk assessment

Catalyst Objective

Objective: build a prototype and develop a new TM Forum Open API that will allow service providers to protect sensitive data so it can be shared with third parties without compromising privacy, meeting new regulatory requirements such as the GDPR.

Technical objectives:
• Building upon industry-agreed standards, enabling a scalable platform and improving interoperability, efficiency and transparency
• Investigating various methods of anonymizing and protecting data for use in the API
• Prototyping an initial API for providing anonymized data to third parties without compromising the privacy of the data content
• Assessing the effectiveness of the various anonymization approaches that could be used, and the impact on the "value" of the data shared if the data is 100% protected
• Exploring ways to support a two-way Open API for anonymization (possibly pseudo-anonymization / data protection)

Business objectives:
• Exploring new business models for telco Data-as-a-Service
• Exploring potential uses outside the telco domain, e.g. for banking services, media content providers, etc.

Use Cases: Data Protection for Different Needs
• Operational usage: treating personal data for non-production purposes, such as samples needed for testing, training, or sharing with internal or external suppliers and with employees outside Europe
• Marketing usage: customer knowledge improvement or new service development
• Business usage: data monetization with external customers
• GDPR right to be forgotten
• Artificial intelligence platform sharing or AI algorithm model training, for example 5G location and content prediction algorithms
• Data publishing, for example scientific studies and open data

Use Case Example 1: Offer Content
• As a third-party marketing company (exec), I need to offer content (film/TV and VoD) to a specific group of subscribers in a specific geographic market, based on various criteria, so that I can increase content usage and increase revenue for our customers.
• To do this, I need to access data from the telco's customer base (subscribers) to understand which subscribers (based on various criteria) are appropriate to target. I need this capability to be conducted under the auspices of national data governance rules and regulations, to guarantee the data anonymisation of the telco's subscribers.
• I know I am successful when there is an increase in sales of the content to the point where the service is profitable and is accepted by the telco's governance and security policies.

Business Canvas
To drive revenue growth and to unleash offers/products (eTOM terminology).

Data Flow
Data originated → data pseudonymized → data analyzed via machine learning → data deanonymized.

Anonymization API (see the demo at the Catalyst stand, level 2)

Data for the Demo
The demo needed to be based on publicly available data, so we selected the ADULT dataset from the University of California, Irvine – Machine Learning Repository.

Enriched UCI “Adult” dataset (extra synthetic telco fields)
• Based on the UCI “Adult” dataset, enhanced with telco fields, e.g. IMEI, IMSI, etc.
(Screenshots: INPUT FILE to API / OUTPUT FILE after processing)

Example Anonymization using “K-Anonymization”
• AGE data is binned into ranges of 10 years
(Screenshots: INPUT FILE to API / OUTPUT FILE after processing)

Example of Processing of Data using “Discretization”
• Human-readable information mapped to discrete values on sensitive fields
(Screenshots: INPUT FILE to API / OUTPUT FILE after processing)

Example Pseudonymization using one-way
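The k-anonymization demo bins AGE into 10-year ranges so that each quasi-identifier combination is shared by several records. A minimal sketch of that generalization step (plain Python; the row values and field names are invented for illustration — the Catalyst API itself is not shown here):

```python
from collections import Counter

def generalize_age(age: int, bin_width: int = 10) -> str:
    """Generalize an exact age into a coarse range, e.g. 34 -> '30-39'."""
    low = (age // bin_width) * bin_width
    return f"{low}-{low + bin_width - 1}"

def k_anonymity(records, quasi_identifiers):
    """Smallest equivalence-class size over the quasi-identifier columns.
    A dataset is k-anonymous if every combination of quasi-identifier
    values is shared by at least k records."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

# Toy rows loosely modelled on the enriched "Adult" data (synthetic values).
rows = [
    {"age": 34, "workclass": "Private", "country": "US"},
    {"age": 37, "workclass": "Private", "country": "US"},
    {"age": 52, "workclass": "Self-emp", "country": "US"},
    {"age": 55, "workclass": "Self-emp", "country": "US"},
]
for r in rows:
    r["age"] = generalize_age(r["age"])   # 34 -> '30-39', 52 -> '50-59'

print(k_anonymity(rows, ["age", "workclass"]))  # 2 after binning
```

Before binning every (age, workclass) pair is unique (k = 1); after binning each pair covers two records, which is the utility/privacy trade-off the deck describes.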
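The discretization example maps human-readable values on sensitive fields to discrete codes. A sketch of that mapping (field names and values are hypothetical, not taken from the demo files):

```python
def discretize(records, field):
    """Replace human-readable values of a sensitive field with discrete
    integer codes; return the rewritten rows plus the mapping used."""
    mapping = {}
    out = []
    for r in records:
        code = mapping.setdefault(r[field], len(mapping))
        out.append({**r, field: code})
    return out, mapping

rows = [{"occupation": "Tech-support"}, {"occupation": "Sales"},
        {"occupation": "Tech-support"}]
coded, mapping = discretize(rows, "occupation")
print(coded)    # [{'occupation': 0}, {'occupation': 1}, {'occupation': 0}]
print(mapping)  # {'Tech-support': 0, 'Sales': 1}
```

Note that if the mapping table is retained, this behaves like pseudonymization (reversible); discarding it makes the step one-way.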
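The final slide title is cut off after "one-way" in this extract. A keyed one-way hash (e.g. HMAC-SHA256) is a standard pseudonymization technique, so the following is a sketch under that assumption, not necessarily what the demo implemented; the key and field values are placeholders:

```python
import hashlib
import hmac

SECRET_KEY = b"demo-key-rotate-me"  # hypothetical; manage via a KMS in practice

def pseudonymize(value: str, key: bytes = SECRET_KEY) -> str:
    """Replace an identifier with a keyed one-way hash (HMAC-SHA256).
    The same input always yields the same pseudonym, so joins and
    analytics still work; without the key, small identifier spaces
    (e.g. phone numbers) cannot simply be brute-forced back."""
    return hmac.new(key, value.encode(), hashlib.sha256).hexdigest()[:16]

row = {"name": "Jane Doe", "imsi": "234150999999999", "age": "30-39"}
row["name"] = pseudonymize(row["name"])
row["imsi"] = pseudonymize(row["imsi"])
# "age" is already generalized, so it is left untouched.
```

Using a secret key rather than a bare hash matters: per the deck's definition, the holder of the key can re-identify (pseudonymization), so the output remains personal data under the GDPR.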
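Two of the data masking techniques listed earlier, shuffling and number/date variance, can also be sketched in a few lines (illustrative only; field names are invented):

```python
import random

def shuffle_column(records, field, seed=None):
    """Shuffling: permute one column across rows so values stay realistic
    but no longer line up with their original record."""
    rng = random.Random(seed)
    values = [r[field] for r in records]
    rng.shuffle(values)
    return [{**r, field: v} for r, v in zip(records, values)]

def vary_number(value, max_delta=5, seed=None):
    """Number variance: perturb a numeric field by a small random amount,
    preserving its rough magnitude while hiding the exact value."""
    rng = random.Random(seed)
    return value + rng.randint(-max_delta, max_delta)

rows = [{"salary": 40000}, {"salary": 52000}, {"salary": 61000}]
masked = shuffle_column(rows, "salary", seed=42)
print(vary_number(100, seed=1))  # some value between 95 and 105
```

Both keep the column statistically plausible, which is why masking suits the non-production (test/training) use cases above.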