Researching Extremist Content on Social Media Platforms: Data Protection and Research Ethics Challenges and Opportunities

Manjana Sold, Julian Junk

GNET is a special project delivered by the International Centre for the Study of Radicalisation, King's College London. The authors of this report are Manjana Sold and Julian Junk.

The Global Network on Extremism and Technology (GNET) is an academic research initiative backed by the Global Internet Forum to Counter Terrorism (GIFCT), an independent but industry-funded initiative for better understanding, and counteracting, terrorist use of technology. GNET is convened and led by the International Centre for the Study of Radicalisation (ICSR), an academic research centre based within the Department of War Studies at King's College London. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing those, either expressed or implied, of GIFCT, GNET or ICSR.

CONTACT DETAILS
For questions, queries and additional copies of this report, please contact:
ICSR
King's College London
Strand
London WC2R 2LS
United Kingdom
T. +44 20 7848 2098
E. [email protected]
Twitter: @GNET_research

Like all other GNET publications, this report can be downloaded free of charge from the GNET website at www.gnet-research.org.

© GNET

Executive Summary

The nexus between terrorism and technology is socially and politically more relevant than ever. Almost every mobilisation and radicalisation process and every violent attack, whether carried out or prevented, has an online component to it. Researchers, not least those involved in GNET, are meeting head on the challenge of analysing these processes in difficult empirical online environments. Even when taking into account ever changing platforms and policies, as well as the shift towards ever more closed and encrypted spaces, there is abundant empirical data. This abundance creates challenges and opportunities of its own,1 yet there are also hard limits to and grey zones around what a researcher dealing with extremist online content can and is allowed to do, which brings in ethical and data protection considerations. Discussions of such topics gained much relevance in recent years and are particularly lively in international research consortia. While summarising the state of these discussions around ethics and privacy, this GNET report identifies the limits of and opportunities for research and formulates related recommendations for researchers and tech companies. It proceeds in three steps: first, it summarises some of the main ethical considerations that a researcher in this academic field should bear in mind; second, it provides an overview of the main data protection principles that are to be observed and highlights the opportunities for and balancing acts required of researchers in this regard; and third, it discusses the interplay between researchers, data sources and the policies of platforms, condensing in this context some key recommendations for researchers, tech companies and regulators.
The most important points are: first, more cross-platform access points and databases would provide incentives for a broader and more vibrant research field; second, badly needed international research collaboration around analysing extremist online content would benefit from greater international harmonisation and a convergence of data protection rules; third, data protection regimes should not be seen as inconveniences but as enabling research by providing a clearer demarcation of what is and is not possible; and, lastly, the dynamic empirical field requires regular mechanisms of exchange between tech companies, researchers and regulators to adapt policies, habits and legal frameworks in a way that does justice to the social and political relevance of the nexus between extremism and technology.

1 Abdullah Alrhmoun, Shiraz Maher, and Charlie Winter, Decoding Hate: Using Experimental Text Analysis to Classify Terrorist Content, ICSR King's College London (2020).

Contents

Executive Summary
1 Introduction
2 Central Ethical Considerations
   Ethical principles that concern the individual research subject
   Ethical principles that concern the societal dimension
   Ethical principles with regard to researchers themselves
3 Central Data Protection Principles – Legal Limits, Challenges and Opportunities for Researchers
   Legal regulations for research with personal data with the consent of the persons concerned
   Legal regulations for research with personal data without the consent of the persons concerned
4 Data Source, Platform Policies and Researchers – Overview, Interplay and Recommendations
   Twitter
   Facebook
   Google
   TikTok
   Telegram
   General Recommendations
5 Concluding Remarks
Policy Landscape

1 Introduction

The digital space played a central role in the radicalisation processes of many perpetrators of past attacks:1 extremists including Anis Amri (Berlin, Germany), Brenton Tarrant (Christchurch, New Zealand) and Stephan Balliet (Halle, Germany) took advantage of social media platforms not only to gather and distribute information, and to network and stage themselves, but also to exchange ideas with like-minded people and sometimes even to share an attack live for thousands of viewers. It is through this communication by radical or extremist actors that we can learn much about the radicalisation processes that take place in the virtual world. The content and its presentation, as well as the way in which these actors communicate, are of central importance in that regard and can serve as a background against which to develop the most appropriate preventative and demobilising measures. In the context of this research field, data retrieved from social media has naturally become increasingly important.2 This is exemplified by numerous scientific publications based on data from social media platforms such as Facebook,3 Twitter,4 YouTube5 and Instagram.6 An extremely large pool of data can now be accessed and used to develop and test hypotheses.7 These opportunities go hand in hand with limitations and pitfalls relating to ethical and data protection requirements, which certainly pose challenges for researchers but also offer many opportunities.
While transparency and the guideline "maximising benefits and minimising harm" are essential throughout the entire research process, there are further principles and guidelines that need to be considered. In the first two sections, we summarise some key ethical considerations that a research process in this academic field should include and we provide insights

1 We are immensely grateful for the comments from Sebastian Golla on earlier versions of this report and his skilful legal guidance through so many of our research endeavours over the last years. Thanks go as well to Clara-Auguste Süß for her comments and to Leo Bauer and Klara Sinha for their support in finalising this report.
2 Sebastian J. Golla, Henning Hofmann, and Matthias Bäcker, "Connecting the Dots: Sozialwissenschaftliche Forschung in Sozialen Online-Medien im Lichte von DS-GVO und BDSG-neu," Datenschutz und Datensicherheit – DuD 42, no. 2 (2018): 89, http://link.springer.com/10.1007/s11623-018-0900-x; Manjana Sold, Hande Abay Gaspar, and Julian Junk, Designing Research on Radicalisation using Social Media Content: Data Protection Regulations as Challenges and Opportunities, 2020.
3 Agata Błachnio, Aneta Przepiórka, and Patrycja Rudnicka, "Psychological Determinants of Using Facebook: A Research Review," International Journal of Human-Computer Interaction 29 (2013), https://doi.org/10.1080/10447318.2013.780868; Ralf Caers et al., "Facebook: A Literature Review," New Media & Society 15 (2013), https://doi.org/10.1177/1461444813488061; Stefania Manca and Maria Ranieri, "Is It a Tool Suitable for Learning? A Critical Review of the Literature on Facebook as a Technology Enhanced Learning Environment," Journal of Computer Assisted Learning 29 (2013), https://doi.org/10.1111/jcal.12007; Ashwini Nadkarni and Stefan G. Hofmann, "Why do People Use Facebook?," Personality and Individual Differences 52, no. 3 (2012), https://doi.org/10.1016/j.paid.2011.11.007; Robert E. Wilson, Samuel D. Gosling, and Lindsay T. Graham, "A Review of Facebook Research in the Social Sciences," Perspectives on Psychological Science 7 (2012), https://doi.org/10.1177/1745691612442904.
4 Jytte Klausen, "Tweeting the Jihad: Social Media Networks of Western Foreign Fighters in Syria and Iraq," Studies in Conflict & Terrorism 38, no. 1 (2015); Amandeep Dhir, Khalid Buragga, and Abeer A. Boreqqah, "Tweeters on Campus: Twitter a Learning Tool in Classroom?," Journal of Universal Computer Science 19 (2013); Shirley Ann Williams, Melissa Terras, and Claire Warwick, "What Do People Study When They Study Twitter? Classifying Twitter Related Academic Papers," Journal of Documentation 69 (2013).
5 Chareen Snelson, "YouTube Across the Disciplines: A Review of the Literature," MERLOT Journal of Online Learning and Teaching 7, no. 1 (2011), http://jolt.merlot.org/vol7no1/snelson_0311.htm; Raphael Ottoni et al., "Analyzing Right-wing YouTube Channels: Hate, Violence and Discrimination," (2018); Kostantinos Papadamou et al., "Understanding