Data De-Identification, Pseudonymization, and Anonymization: Legal and Policy Considerations


Data De-identification, Pseudonymization, and Anonymization: Legal and Policy Considerations

Daniel Barth-Jones, Assistant Professor of Clinical Epidemiology, Mailman School of Public Health, Columbia University
Kate Godfrey, Vice President, Government Affairs, Chief Compliance & Privacy Officer, Adaptive Biotechnologies
Nancy L. Perkins, Counsel, Arnold & Porter

May 26, 2021

De-identification Concepts and Methodologies

HIPAA's Identification Spectrum

[Figure: spectrum running from "No Information" (totally safe, but useless) through De-identified, "Breach Safe" LDS, and LDS to fully identified PHI; the corresponding permitted uses run from "useful for any purpose" through breach avoidance and research/public health/healthcare operations to treatment, payment, and healthcare operations.]

• Protected Health Information (PHI)
• Limited Data Set (LDS), §164.514(e)
  o Eliminate 16 direct identifiers (name, address, SSN, etc.)
• LDS without 5-digit Zip and date of birth (LDS "Breach Safe"), 8/24/09 Fed. Reg.
  o Eliminate 16 direct identifiers plus Zip5 and DoB
• Safe Harbor De-identified Data Set (SHDDS), §164.514(b)(2)
  o Eliminate 18 identifiers (including geography below 3-digit Zip, and all dates except year)
• Expert Determination Data Set (EDDS), §164.514(b)(1)
  o Verified "very small" risk of re-identification

GDPR's Identification Spectrum

[Figure: spectrum running from Anonymized ("no longer identifiable," Recital 26), which escapes the GDPR entirely (similar to HIPAA's EDDS?), through data the controller can't identify (Article 11), which escapes Articles 15–20, and Pseudonymized data (Article 4(5)), which can't be identified without additional information, to Personal Data (Article 4(1)); HIPAA's identification spectrum is repeated alongside for comparison.]

[Figure slide annotated: "← or in other words, 'Science'"]

U.S. State-Specific Re-identification Risks: Population Uniqueness

[Figure: state-by-state population-uniqueness re-identification risk on a log scale (roughly 1 in 10 down to below 1 in a million), with states ordered by population size (CA, TX, NY, FL, IL, ..., AK, ND, VT, DC, WY), for the combined quasi-identifiers DoB/Z5, MoB/Z5, YoB/Z5, DoB/Z3, MoB/Z3, YoB/Z3, and YoB/Z3/Race. Legend: DoB = date of birth; MoB = birth month & year; YoB = year of birth; Z5 = 5-digit Zip code; Z3 = 3-digit Zip code; race coded as White, Black, Hispanic, Asian, Other; gender also included as a quasi-identifier. Safe Harbor-compliant combinations are distinguished from non-compliant ones; * marks the HIPAA Safe Harbor risk estimate of 4/10,000. † HIPAA Safe Harbor does not permit any dates more specific than the year, or geographic units smaller than 3-digit Zip codes (Z3). Data source: 2010 U.S. Decennial Census. Graph © DB-J 2013.]
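The uniqueness numbers behind charts like this come from a counting exercise over census microdata: group the population by each quasi-identifier combination and measure how much of it falls into a group of one. Below is a minimal sketch of that calculation; the toy records and column choices are hypothetical, not the chart's actual data.

```python
# Population uniqueness: the share of people who are the only member of
# their quasi-identifier equivalence class (k-anonymity with k = 1).
from collections import Counter

# Toy population: (year_of_birth, zip3, race, gender) per person.
population = [
    (1980, "100", "White", "F"), (1980, "100", "White", "F"),
    (1980, "100", "Black", "M"), (1975, "101", "Asian", "F"),
    (1975, "101", "Asian", "F"), (1990, "102", "Hispanic", "M"),
]

def uniqueness_risk(records):
    """Fraction of records that are unique on the quasi-identifiers."""
    class_sizes = Counter(records)
    uniques = sum(size for size in class_sizes.values() if size == 1)
    return uniques / len(records)

print(f"Population-unique fraction: {uniqueness_risk(population):.2%}")
# Richer quasi-identifiers (full date of birth, 5-digit Zip) shrink the
# equivalence classes and push this fraction up -- the movement the
# state-by-state chart displays on its log scale.
```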
The Inconvenient Truth

"De-identification leads to information loss which may limit the usefulness of the resulting health information" (p. 8, HHS De-ID Guidance, Nov 26, 2012)

[Figure: the trade-off between information quality and privacy protection, with disclosure protection on a log scale from "no protection" to "complete protection" and information precision/lack of bias running from "no information" to optimal. Heavy protection with little information yields "bad decisions / bad science"; high precision with little protection yields poor privacy protection; and the ideal situation of perfect information with perfect protection is unfortunately not achievable due to mathematical constraints.]

Unfortunately, de-identification public policy has often been driven by largely anecdotal and limited evidence, and by re-identification demonstration attacks targeted at particularly vulnerable individuals, which fail to provide reliable evidence about real-world re-identification risks.

Misconceptions about HIPAA De-identified Data: "It doesn't work…"

• "easy, cheap, powerful re-identification" (Ohm, 2009, "Broken Promises of Privacy")
  o *Pre-HIPAA re-identification risks: {Zip5, birth date, gender} reportedly able to identify 87%?, 63%, and 28%? of the U.S. population (Sweeney, 2000; Golle, 2006; Sweeney, 2013)
• Reality: HIPAA-compliant de-identification provides important privacy protections.
  o Safe Harbor re-identification risks have been estimated at 0.04% (4 in 10,000) (Sweeney, NCVHS Testimony, 2007).
• Reality: Under HIPAA de-identification requirements, re-identification is expensive and time-consuming to conduct, requires substantive computer/mathematical skills, is rarely successful, and usually leaves it uncertain whether it has actually succeeded.

Misconceptions about HIPAA De-identified Data: "It works perfectly and permanently…"

• Reality:
  o Perfect de-identification is not possible.
  o De-identifying does not free data from all possible subsequent privacy concerns.
  o Data is never permanently "de-identified"…
  o There is no 100% guarantee that de-identified data will remain de-identified regardless of what you do with it after it is de-identified.
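What Safe Harbor actually strips is concrete enough to sketch in code. The following is a minimal illustration of two of its generalizations (dates reduced to year, with ages over 89 aggregated, and Zip codes truncated to three digits, with sparsely populated Zip3 areas masked); the field names are hypothetical and the restricted-Zip3 set is an abbreviated illustration, not the authoritative census-derived list.

```python
# Sketch of HIPAA Safe Harbor generalizations under §164.514(b)(2).
from datetime import date

# Zip3 areas with 20,000 or fewer residents must be recoded to "000";
# this set is an illustrative subset, not the full list.
RESTRICTED_ZIP3 = {"036", "059", "102", "203"}

def safe_harbor_year(d: date) -> int:
    """All elements of dates except the year must be removed."""
    return d.year

def safe_harbor_age(age: int):
    """Ages over 89 must be aggregated into a single '90+' category."""
    return age if age < 90 else "90+"

def safe_harbor_zip(zip5: str) -> str:
    """Keep only the initial three digits, masking low-population areas."""
    zip3 = zip5[:3]
    return "000" if zip3 in RESTRICTED_ZIP3 else zip3

record = {"dob": date(1978, 6, 2), "age": 45, "zip": "03755"}
deidentified = {
    "birth_year": safe_harbor_year(record["dob"]),
    "age": safe_harbor_age(record["age"]),
    "zip3": safe_harbor_zip(record["zip"]),
}
print(deidentified)         # {'birth_year': 1978, 'age': 45, 'zip3': '037'}
print(safe_harbor_age(92))  # 90+ (years indicative of age 90+ must also go)
```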
Re-identification Demonstration Attack Summary

• Publicized attacks are on data without HIPAA/SDL de-identification protection.
• Many attacks targeted especially vulnerable subgroups and did not use sampling to assure representative results.
• Press reporting often portrays re-identification as broadly achievable, when there isn't any reliable evidence supporting this portrayal.

Re-identification Demonstration Attack Summary (continued)

• For Ohm's famous "Broken Promises" attacks (Weld, AOL, Netflix), a total of n=4 people were re-identified out of 1.25 million.
• For attacks against HIPAA de-identified data (ONC, Heritage*), a total of n=2 people were re-identified out of 128 thousand.
  o ONC attack quasi-identifiers: Zip3, YoB, gender, marital status, Hispanic ethnicity
  o Heritage attack quasi-identifiers*: age, sex, days in hospital, physician specialty, place of service, CPT procedure codes, days since first claim, ICD-9 diagnoses (*not a complete list of the data available for an adversary attack)
• Both were "adversarial" attacks.
• For all attacks listed, a total of n=268 were re-identified out of 327 million opportunities. Let's get some perspective on this…

Obviously, This slide is BLACK. So clearly, De-identification Doesn't Work.

Precautionary Principle or Paralyzing Principle?

"When a re-identification attack has been brought to life, our assessment of the probability of it actually being implemented in the real-world may subconsciously become 100%, which is highly distortive of the true risk/benefit calculus that we face." – DB-J

Re-identification Demonstration Attack Summary

• What can we conclude from the empirical evidence provided by these 11 highly influential re-identification attacks?
• The proportion of demonstrated re-identifications is extremely small.
  o This does not imply that data re-identification risks are necessarily very small (especially if the data has not been subject to Statistical Disclosure Limitation methods).
• But with only 268 re-identifications made out of 327 million opportunities, Ohm's "Broken Promises" assertion that "scientists have demonstrated they can often re-identify with astonishing ease" seems rather dubious.
• It also seems clear that the state of "re-identification science", and the "evidence" it has provided, needs to be dramatically improved in order to better support good public policy regarding data de-identification.

[Two figure slides contrasting the *pre-HIPAA risk baseline with the *HIPAA Safe Harbor risk baseline.]

Re-identification Risks: Population Uniqueness (Starting Assumptions?)

[Figure: state-specific box-and-whisker plot of population-uniqueness risks on a log scale from 100% down to 0.0000001%, for the combined quasi-identifiers DoB/MoB/YoB crossed with Z5/Z3 (legend: DoB = date of birth; MoB = birth month & year; YoB = year of birth; Z5 = 5-digit Zip code; Z3 = 3-digit Zip code; race coded as White, Black, Hispanic, Asian, Other; gender also included as a quasi-identifier), separating Safe Harbor-compliant combinations from non-compliant ones. * HIPAA Safe Harbor risk estimate: 4/10,000. † HIPAA Safe Harbor does not permit any dates more specific than the year, or geographic units smaller than 3-digit Zip codes (Z3). Data source: 2010 U.S. Decennial Census. Graph © DB-J 2013.]
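The counts above translate into rates worth seeing side by side. A quick illustrative calculation using only the figures quoted on these slides:

```python
# Re-identifications per opportunity for the attack tallies cited above.
attacks = {
    "Ohm 'Broken Promises' (Weld, AOL, Netflix)": (4, 1_250_000),
    "HIPAA de-identified data (ONC, Heritage)": (2, 128_000),
    "All 11 attacks combined": (268, 327_000_000),
}

for name, (reidentified, opportunities) in attacks.items():
    rate = reidentified / opportunities
    print(f"{name}: {reidentified:,} of {opportunities:,} = {rate:.7%}")
# The combined rate is about 0.00008%, i.e., roughly 8 re-identifications
# per 10 million opportunities -- the perspective the black-slide joke
# and the "paralyzing principle" slide are driving at.
```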
Anonymization Under the GDPR

How Do We Achieve Anonymization?

Under the GDPR, re-identification of a data subject (even by the company that anonymized the data) must be impossible. That doesn't sound that hard… what's all the fuss about?

The challenge arises when considering whether the exposed data (what remains after anonymization) can be paired with other available information to infer the identity of data subjects.

Recital 26 states:

o "To determine whether a natural person is identifiable, account should be taken of all the means reasonably likely to be used, such as singling out, either by the controller or by another person to identify the natural person directly or indirectly. To ascertain whether means are reasonably likely to be used to identify the natural person, account should be taken of all objective factors, such as the costs of and the amount of time required for identification, taking into consideration the available technology at the time of the processing and technological developments."

The plot thickens: the EDPB published guidance stating that the test for whether a person is "identifiable" does not necessarily mean that an actual subject is identified, just that it would be possible to identify the subject.

Is Anonymization Possible? Experts Weigh In…

The IAPP has stated that true anonymization may be near impossible, especially considering the study highlighted by the NYT showing that it is possible to personally identify 87 percent of the U.S. population from just three data points: five-digit ZIP code, gender, and date of birth.
o And while these data points in a vacuum may be non-identifiable, storing them collectively is what makes it possible to identify an individual.

The UK Information Commissioner's Office provided a warning to companies:
o "In order to be truly anonymised under the GDPR, you must strip personal data of sufficient elements that mean the individual can no longer be identified. However, if you could at any point use any reasonably available means to re-identify the individuals to which the data refers, that data will not have been effectively anonymised but will have merely been pseudonymised."
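The ICO's closing distinction is easy to see in code: replacing a direct identifier with a keyed pseudonym leaves re-identification trivially possible for whoever holds the key, which is exactly the "additional information" of GDPR Article 4(5). A minimal sketch follows; the field names and key are hypothetical, and a real deployment would keep the key in a separate, access-controlled system.

```python
# Keyed pseudonymization: deterministic HMAC tokens in place of a name.
# Whoever holds SECRET_KEY can re-identify, so the output is "merely
# pseudonymised" under the GDPR, not anonymised.
import hashlib
import hmac

SECRET_KEY = b"keep-me-separately-and-securely"  # the "additional information"

def pseudonymize(identifier: str) -> str:
    """Deterministic keyed pseudonym (HMAC-SHA256) for a direct identifier."""
    digest = hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

record = {"name": "Jane Doe", "zip3": "100", "diagnosis": "J45.909"}
pseudonymized = {**record, "name": pseudonymize(record["name"])}
print(pseudonymized)
# The quasi-identifiers (zip3, diagnosis, ...) pass through untouched, and
# anyone with the key can regenerate the token from a candidate name --
# which is why Recital 26 still treats this record as personal data.
```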