Data Privacy: De-Identification Techniques


By Ulf Mattsson – ISSA member, New York Chapter
ISSA Journal | May 2020

This article discusses emerging data privacy techniques, standards, and examples of applications implementing different use cases of de-identification techniques. We will discuss different attack scenarios and practical balances between privacy requirements and operational requirements.

Abstract
The data privacy landscape is changing. There is a need for privacy models in the current landscape of increasing numbers of privacy regulations and privacy breaches. Privacy methods use models, and it is important to have a common language when defining privacy rules. This article discusses practical recommendations for finding the right balance between compliance, security, privacy, and operational requirements for each type of data and business use case.

Figure 1 – Forrester's global map of privacy rights and regulations [4]

Sensitive data can be exposed to internal users, partners, and attackers. Different data protection techniques can provide a balance between protection and transparency to business processes. The requirements are different for systems that are operational, analytical, or test/development, as illustrated by some practical examples in this article.

We will discuss different aspects of various data privacy techniques, including data truthfulness, applicability to different types of attributes, and whether the techniques can reduce the risk of data singling out, linking, or inference. We will also illustrate the significant differences in applicability of pseudonymization, cryptographic tools, suppression, generalization, and noise addition.

Many countries have local privacy regulations
Besides the GDPR, many EU countries and other countries have local privacy regulations (figure 1). For instance, the California Consumer Privacy Act (CCPA) is a wake-up call, addressing identification of individuals via data inference through a broader range of PII attributes. CCPA defines personal information as information that identifies, relates to, describes, is reasonably capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household, such as a real name, alias, postal address, or unique personal identifier [1].

Privacy fines
In 2019, Facebook settled with the US Federal Trade Commission (FTC) over privacy violations, a settlement that required the social network to pay US$5 billion. British Airways was fined £183 million by the UK Information Commissioner's Office (ICO) for a series of data breaches in 2018, followed by a £99 million fine against the Marriott International hotel chain. French data protection regulator Commission Nationale de l'Informatique et des Libertés (CNIL) fined Google €50 million in 2019.

Privacy and security are not the same
Privacy and security are two sides of a complete data protection solution. Privacy has to do with an individual's right to own the data generated by his or her life and activities, and to restrict the outward flow of that data [15]. Data privacy, or information privacy, is the aspect of information technology (IT) that deals with the ability to determine what data can be shared with third parties [10].

Two identifier types from the attribute classification discussed later in this article (see "De-identification of data"):
• Direct identifier: an attribute that alone enables unique identification of a data principal within a specific operational context.
• Indirect identifier: an attribute that, when considered in conjunction with other attributes, enables unique identification. Indirect identifiers need to be treated by selectively applying generalization, randomization, suppression, or encryption-based de-identification techniques to the remaining attributes.
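The handling of direct and indirect identifiers described above can be sketched in code. This is a minimal illustration, not the article's implementation; the record layout, field names, and generalization rules are assumptions made for the example:

```python
# Minimal sketch: suppress direct identifiers, generalize indirect ones.
# The fields and rules below are illustrative assumptions.

def de_identify(record):
    """Return a de-identified copy of a single record."""
    out = dict(record)
    # Suppression: drop attributes that alone identify the data principal.
    for direct_id in ("name", "ssn", "email"):
        out.pop(direct_id, None)
    # Generalization: coarsen indirect identifiers so they no longer
    # narrow the record down to one individual.
    decade = (record["age"] // 10) * 10
    out["age"] = f"{decade}-{decade + 9}"          # 34 -> "30-39"
    out["zip"] = record["zip"][:3] + "**"          # keep only the ZIP prefix
    return out

record = {"name": "Jane Doe", "ssn": "123-45-6789", "email": "jane@example.com",
          "age": 34, "zip": "10027", "diagnosis": "J45"}
print(de_identify(record))
# {'age': '30-39', 'zip': '100**', 'diagnosis': 'J45'}
```

Suppression removes the direct identifiers outright, while generalization keeps the indirect identifiers usable for analysis at a coarser granularity.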
Many organizations forget to do risk management before selecting privacy tools. The NIST "Privacy Framework: A Tool for Improving Privacy through Enterprise Risk Management" [9] can drive better privacy engineering and help organizations protect individual privacy through enterprise risk management. Figure 2 illustrates how NIST considers the overlap and differences between cybersecurity and privacy risks.

Figure 2 – Overlap and differences between cybersecurity and privacy risks [9]

Privacy suffers if you don't know your data
You must start by knowing your data before selecting your tools. Defining your data via data discovery and classification is the first part of a three-part framework that Forrester calls the "Data Security and Control Framework" [11]. This framework breaks down data protection into three key areas:
1. Defining the data
2. Dissecting and analyzing the data
3. Defending the data

Privacy standards
ISO standards provide a broad international platform, and frequently the standards are based on mature security and privacy standards developed by US ANSI X9 for the financial industry. ISO published an international standard to use as a reference for selecting PII (personally identifiable information) protection controls within cloud computing systems [7].

De-identification of data
Interest in technical capabilities for anonymizing data expanded as the GDPR came into force. This is because, with truly anonymized data, data protection authorities no longer consider it personally identifiable information, and it falls outside the scope of the regulation.

• Anonymization: a method of de-identification that removes all personally identifiable information from a dataset to the extent that the possibility of re-identification becomes negligible. Anonymization requires an approach that might combine different elements, such as technology, oversight, and legal agreements with third parties [12].
• Pseudonymization: a method of de-identification that replaces personally identifiable information with false identifiers, which facilitates data analysis while maintaining strong data protection [12] (figure 4).

The classification of attributes reflects the difference between singling out a data principal in a dataset and identifying a data principal within a specific operational context:

• Identifier: a set of attributes in a dataset (collection of data) that enables unique identification of a data principal within a specific operational context.
• Local identifier: a set of attributes that together single out a data principal in the dataset. For example, each record may be composed of a set of attributes; each record carries information about a data principal, and each attribute carries a known meaning.
• Unique identifier: an attribute that alone singles out a data principal in the dataset.
• Quasi-identifier: an attribute that, when considered in conjunction with other attributes in the dataset, singles out a data principal.

Figure 3 depicts the relationship between the various types of identifiers described above. Any attribute that is equal to or part of one of these identifier types is deemed an identifying attribute. In figure 3, Dataset refers to the dataset to which de-identification techniques are to be applied, and Context refers to the information derived from the operational context of the dataset.

Figure 3 – Types of identifiers

Pseudonyms independent of identifying attributes
Pseudonym values are typically independent of the replaced attributes' original values, via generation of random values. Data tokens may or may not be derived from original values.
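The notion of "singling out" that underlies unique and quasi-identifiers can be made concrete with a small sketch: a record is singled out when its combination of quasi-identifier values occurs exactly once in the dataset. The dataset and column names below are illustrative assumptions:

```python
from collections import Counter

def singled_out(rows, quasi_ids):
    """Return rows whose quasi-identifier value combination is unique."""
    key = lambda r: tuple(r[a] for a in quasi_ids)
    group_sizes = Counter(key(r) for r in rows)
    return [r for r in rows if group_sizes[key(r)] == 1]

rows = [
    {"zip": "100**", "age": "30-39", "sex": "F"},
    {"zip": "100**", "age": "30-39", "sex": "F"},
    {"zip": "100**", "age": "40-49", "sex": "M"},   # unique combination
]
print(singled_out(rows, ["zip", "age", "sex"]))
# [{'zip': '100**', 'age': '40-49', 'sex': 'M'}]
```

A check like this, run after generalization, shows whether any data principal can still be isolated by linking the remaining attributes.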
Tokens were covered in an earlier article [8]. When pseudonyms are generated independently of the attributes, a table containing the mappings (or assignments) of the original identifier(s) to the corresponding pseudonyms can be created. For security reasons, appropriate technical and organizational security measures need to be applied to limit and/or control access to such a table, in accordance with the organization's objectives and re-identification risk assessment [7] (figure 5).

Figure 4 – Pseudonymization and Anonymization

De-identification techniques
Some techniques are performed on the original dataset, and others are performed on a copy of the dataset that you may share with users or organizations that should not have access to the original data. We discuss these aspects below, along with the consideration that you may lose data granularity when applying some techniques, including data generalization. Generalization can partially reduce the risk of linking, and it partially reduces the risk of inference if each generalized group covers a sufficiently large number of individuals. Selection of group sizes needs to consider whether the result will be skewed if the data is used for statistical analysis, for example in population census reporting.

Figure 5 – Examples of de-identification techniques

Generalization
Generalization techniques reduce the granularity of information contained in a selected attribute, or in a set of related attributes, in a dataset. Generalization techniques preserve data truthfulness at the record level. The output of generalization is microdata.

Rounding
Rounding may involve a rounding base for a selected attribute

Sampling
Data sampling is a statistical analysis technique that selects a representative subset of a larger dataset in order to analyze and recognize patterns in the original dataset. To reduce the risk of re-identification, sampling is performed on data principals [7]. Sampling provides data
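The mapping-table approach to pseudonymization described above can be sketched in a few lines of Python; the random pseudonyms come from the standard library's `secrets` module, and the in-memory dict stands in for a table that, in practice, must be stored under strict access control:

```python
import secrets

# Minimal sketch: pseudonyms generated independently of the original values,
# with a mapping table kept for authorized re-identification only.
mapping = {}  # original identifier -> pseudonym; must be access-controlled

def pseudonymize(identifier):
    """Replace an identifier with a random, value-independent pseudonym."""
    if identifier not in mapping:
        mapping[identifier] = secrets.token_hex(8)  # not derived from the value
    return mapping[identifier]

p1 = pseudonymize("jane@example.com")
p2 = pseudonymize("jane@example.com")
assert p1 == p2          # stable: the same input maps to the same pseudonym
assert "jane" not in p1  # the pseudonym reveals nothing about the value
```

Because the pseudonym carries no information about the original value, re-identification is possible only through the table itself, which is why the article stresses controlling access to it.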
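Rounding with a rounding base, and sampling of data principals, can each be sketched briefly; the base, sampling rate, and data below are assumptions made for the example:

```python
import random

def round_to_base(value, base):
    """Round a numeric attribute to the nearest multiple of a rounding base."""
    return base * round(value / base)

salaries = [51_250, 49_800, 102_300]
print([round_to_base(s, 5_000) for s in salaries])
# [50000, 50000, 100000]

# Sampling: release only a random subset of data principals, so an observer
# cannot know whether a given individual appears in the released data at all.
random.seed(7)  # fixed seed only to make the sketch reproducible
released = random.sample(range(10_000), k=1_000)  # 10% of record IDs
```

Rounding preserves truthfulness at a coarser granularity, while sampling reduces re-identification risk by adding uncertainty about membership in the released dataset.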