Data Privacy: De-Identification Techniques

By Ulf Mattsson – ISSA member, New York Chapter

This article discusses emerging data privacy techniques, standards, and examples of applications implementing different use cases of de-identification techniques. We will discuss different attack scenarios and practical balances between privacy requirements and operational requirements.

Abstract

The data privacy landscape is changing. There is a need for privacy models in the current landscape of increasing numbers of privacy regulations and privacy breaches. Privacy methods use models, and it is important to have a common language when defining privacy rules. This article discusses practical recommendations for finding the right balance between compliance, security, privacy, and operational requirements for each type of data and business use case.

Sensitive data can be exposed to internal users, partners, and attackers. Different data protection techniques can provide a balance between protection and transparency to business processes. The requirements are different for systems that are operational, analytical, or test/development, as illustrated by some practical examples in this article.

We will discuss different aspects of various data privacy techniques, including data truthfulness, applicability to different types of attributes, and whether the techniques can reduce the risk of data singling out, linking, or inference. We will also illustrate the significant differences in applicability of pseudonymization, cryptographic tools, suppression, generalization, and noise addition.

Many countries have local privacy regulations

Besides the GDPR, many EU countries and other countries have local privacy regulations (figure 1). For instance, the French data protection regulator Commission Nationale de l'informatique et des Libertés (CNIL) fined Google €50 million in 2019. The California Consumer Privacy Act (CCPA) is a wake-up call, addressing identification of individuals via data inference through a broader range of PII attributes. CCPA defines personal information as information that identifies, relates to, describes, is reasonably capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household, such as a real name, alias, postal address, or unique personal identifier [1].

Figure 1 – Forrester's global map of privacy rights and regulations [4]

Privacy fines

In 2019, Facebook settled with the US Federal Trade Commission (FTC) over privacy violations, a settlement that required the social network to pay US$5 billion. British Airways was fined £183 million by the UK Information Commissioner's Office (ICO) for a series of data breaches in 2018, followed by a £99 million fine against the Marriott International hotel chain.

Privacy and security are not the same

Privacy and security are two sides of a complete data protection solution. Privacy has to do with an individual's right to own the data generated by his or her life and activities, and to restrict the outward flow of that data [15]. Data privacy, or information privacy, is the aspect of information technology (IT) that deals with the ability to share data with third parties [10].

Many organizations forget to do risk management before selecting privacy tools. The NIST "Privacy Framework: A Tool for Improving Privacy through Enterprise Risk Management" [9] can drive better privacy engineering and help organizations protect individual privacy through enterprise risk management. Figure 2 illustrates how NIST considers the overlap and differences between cybersecurity and privacy risks.

Figure 2 – Overlap and differences between cybersecurity and privacy risks [9]

Privacy suffers if you don't know your data

You must start by knowing your data before selecting your tools. Defining your data via data discovery and classification is the first part of a three-part framework that Forrester calls the "Data Security and Control Framework" [11]. This framework breaks down data protection into three key areas:

1. Defining the data
2. Dissecting and analyzing the data
3. Defending the data

Privacy standards

ISO standards provide a broad international platform, and frequently the standards are based on mature security and privacy standards developed by US ANSI X9 for the financial industry. ISO published an international standard to use as a reference for selecting PII (personally identifiable information) protection controls within cloud computing systems [7].

The classification of attributes reflects the difference between singling out a data principal in a dataset and identifying a data principal within a specific operational context:

• Identifier: a set of attributes in a dataset (collection of data) that enables unique identification of a data principal within a specific operational context.
• Direct identifier: an attribute that alone enables unique identification of a data principal within a specific operational context.
• Indirect identifier: an attribute that, when considered in conjunction with other attributes, enables unique identification. Indirect identifiers need to be removed from the dataset by selectively applying generalization, randomization, suppression, or encryption-based de-identification techniques to the remaining attributes.
• Local identifier: a set of attributes that together single out a data principal in the dataset. For example, each record may be comprised of a set of attributes; each record carries information about a data principal, and each attribute carries a known meaning.
• Unique identifier: an attribute that alone singles out a data principal in the dataset.
• Quasi-identifier: an attribute that, when considered in conjunction with other attributes in the dataset, singles out a data principal.

Figure 3 depicts the relationship between these types of identifiers. Any attribute that is equal to, or part of, one of the types of identifiers defined above is deemed to be an identifying attribute. In figure 3, dataset refers to the dataset to which de-identification techniques are to be applied, and context refers to information derived from the operational context of the dataset.

Figure 3 – Types of identifiers
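To make the quasi-identifier definition concrete, here is a minimal sketch (the data and attribute choices are hypothetical, not from the article) that counts how many records share each combination of quasi-identifier values; any combination occurring exactly once singles out a data principal:

    from collections import Counter

    # Hypothetical dataset: (ZIP code, birth year, gender) are not direct
    # identifiers, but together they may act as quasi-identifiers.
    records = [
        ("10001", 1980, "F"),
        ("10001", 1980, "F"),
        ("10002", 1975, "M"),
        ("10001", 1981, "F"),
    ]

    # Size of each equivalence class of quasi-identifier values.
    class_sizes = Counter(records)

    # A combination seen only once (k = 1) singles out a data principal.
    singled_out = [combo for combo, k in class_sizes.items() if k == 1]
    print("Combinations that single out a data principal:", singled_out)

Larger equivalence classes (the idea behind k-anonymity) reduce the risk of singling out.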
De-identification of data

Interest in technical capabilities for anonymizing data expanded as the GDPR came into force. This is because, with truly anonymized data, data protection authorities no longer consider it personally identifiable information, and it falls outside the scope of the regulation.

• Anonymization: A method of de-identification that removes all personally identifiable information from a dataset to the extent that the possibility of re-identification of the data becomes negligible. Anonymization requires an approach that might combine different elements, such as technology, oversight, and legal agreements with third parties [12].
• Pseudonymization: A method of de-identification that replaces personally identifiable information with false identifiers, which facilitates data analysis while maintaining strong data protection [12] (figure 4).

Figure 4 – Pseudonymization and Anonymization

Pseudonyms independent of identifying attributes

Pseudonym values are typically independent of the replaced attributes' original values, via generation of random values. Data tokens may or may not be derived from original values; tokens were covered in an earlier article [8]. When pseudonyms are generated independently of the attributes, a table containing the mappings (or assignments) of the original identifier(s) to the corresponding pseudonyms can be created. For security reasons, appropriate technical and organizational security measures need to be applied to limit and/or control access to such a table, in accordance with the organization's objectives and re-identification risk assessment [7] (figure 5).

Figure 5 – Examples of de-identification techniques
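A minimal sketch of this pattern, assuming hypothetical record and field names: pseudonyms are generated at random, so they carry no information about the original identifiers, and the mapping table is kept as a separate artifact whose access can be restricted:

    import secrets

    # Hypothetical source records, keyed by a direct identifier.
    customers = {
        "alice@example.com": {"plan": "gold", "churn_risk": 0.12},
        "bob@example.com": {"plan": "basic", "churn_risk": 0.57},
    }

    mapping_table = {}  # original identifier -> pseudonym; store separately
    pseudonymized = {}

    for identifier, attributes in customers.items():
        # Random pseudonym, independent of the original value.
        pseudonym = secrets.token_hex(8)
        mapping_table[identifier] = pseudonym
        pseudonymized[pseudonym] = dict(attributes)

    # The pseudonymized view supports analysis; re-identification requires
    # the separately protected mapping table.
    print(pseudonymized)

Access to the mapping table would then be limited and controlled in line with the organization's re-identification risk assessment, as the article notes.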
Anonymization requires an • Identifier: a set of attributes in a dataset (collection of data) that enables unique identification of a data principal with- approach that might combine different elements, such as in a specific operational context. technology, oversight, and legal agreements with third parties [12]. May 2020 | ISSA Journal – 27 Data Privacy: De-Identification Techniques | Ulf Mattsson • Pseudonymization: A method of de-identification that re- Pseudonyms independent of identifying attributes places personally identifiable information with false iden- Pseudonym values are typically independent of the replaced tifiers, which facilitates data analysis while maintaining attributes’ original values via generation of random values. strong data protection [12] (figure 4). Data tokens may or may not be derived from original values. Tokens were covered in an earlier article [8]. When pseud- onyms are generated independently of the attributes, a table containing the mappings (or assignments) of the original identifier(s) to the corresponding pseudonym can be created. For security reasons, appropriate technical and organization- al security measures need to be applied to limit and/or control access to such a table, in accordance with the organization’s objectives and re-identification risk assessment [7] (figure 5). Figure 4 – Pseudonymization and Anonymization De-identification techniques Some techniques are performed on the original dataset and others are performed on a copy of the dataset that you may share with users or organizations that should not have access to the original data. We are discussing these aspects below and the consideration that you may lose data granularity when applying some techniques, including data generalization. It can partially reduce the risk of linking and partially reduces the risk of inference if the group identifies a sufficiently large Figure 5 - Examples of de-identification techniques group of individuals. Selection of group sizes needs to consid- er if the result will be skewed if the data is used for statistical Generalization analysis, for example in population census reporting. Generalization techniques reduce the granularity of informa- tion contained in a selected attribute or in a set of related at- Sampling tributes in a dataset. Generalization techniques preserve data Data sampling is a statistical analysis technique that selects truthfulness at the record level. The output of generalization a representative subset of a larger dataset in order to analyze is microdata. and recognize patterns in the original dataset. To reduce the risk of re-identification, sampling is performed on data prin- Rounding cipals [7]. Rounding may involve a rounding base for a selected attri- Sampling provides data
Sampling

Data sampling is a statistical analysis technique that selects a representative subset of a larger dataset in order to analyze and recognize patterns in the original dataset. To reduce the risk of re-identification, sampling is performed on data principals [7]. Sampling provides data…
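A minimal sketch of sampling at the data-principal level (hypothetical records and field names): whole principals, rather than individual rows, are included or excluded, so an observer cannot be sure that any given individual appears in the released subset:

    import random

    # Hypothetical records, each tagged with the data principal it describes.
    records = [
        {"principal": "p1", "visit": "2020-01-03"},
        {"principal": "p1", "visit": "2020-02-11"},
        {"principal": "p2", "visit": "2020-01-27"},
        {"principal": "p3", "visit": "2020-03-09"},
    ]

    # Sample principals, not rows, so all records of a principal
    # are either included or excluded together.
    principals = sorted({r["principal"] for r in records})
    sampled = set(random.sample(principals, k=2))

    subset = [r for r in records if r["principal"] in sampled]
    print(subset)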