Counter-Radicalization Bot Research: Using Social Bots to Fight Violent Extremism


Counter-Radicalization Bot Research: Using Social Bots to Fight Violent Extremism

William Marcellino, Madeline Magnuson, Anne Stickells, Benjamin Boudreaux, Todd C. Helmus, Edward Geist, Zev Winkelman

RAND Corporation

For more information on this publication, visit www.rand.org/t/RR2705

Library of Congress Cataloging-in-Publication Data is available for this publication. ISBN: 978-1-9774-0394-0

Published by the RAND Corporation, Santa Monica, Calif. © Copyright 2020 RAND Corporation. R® is a registered trademark. Cover: monsitj/Adobe Stock.

Limited Print and Electronic Distribution Rights

This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited. Permission is given to duplicate this document for personal use only, as long as it is unaltered and complete. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial use. For information on reprint and linking permissions, please visit www.rand.org/pubs/permissions.

The RAND Corporation is a research organization that develops solutions to public policy challenges to help make communities throughout the world safer and more secure, healthier and more prosperous. RAND is nonprofit, nonpartisan, and committed to the public interest. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.

Support RAND: Make a tax-deductible charitable contribution at www.rand.org/giving/contribute

www.rand.org

Preface

Given the success violent extremist groups have had online—recruiting, funding, and messaging—the U.S. government (USG) has an interest in effective, agile, and scalable online responses. This report examines the applicability of automated social media (SM) accounts, known as bots, to address this problem. While this report was primarily directed at countering groups like the Islamic State of Iraq and the Levant (ISIL), the findings are also applicable to the growing threat of adversary state-sponsored SM information operations. Readers will find an overview of bot technology, a discussion of legal and ethical considerations around bot deployment, a framework for assessing risk/reward in bot operations, and recommendations for the USG in developing and deploying such bot programs.

The research reported here was completed in August 2018 and underwent security review with the sponsor and the Defense Office of Prepublication and Security Review before public release.

This research was sponsored by the U.S. Department of State and the Combating Terrorism Technical Support Office and conducted within the International Security and Defense Policy Center of the RAND National Security Research Division (NSRD), which operates the National Defense Research Institute (NDRI), a federally funded research and development center sponsored by the Office of the Secretary of Defense, the Joint Staff, the Unified Combatant Commands, the Navy, the Marine Corps, the defense agencies, and the defense intelligence enterprise.

For more information on the RAND International Security and Defense Policy Center, see www.rand.org/nsrd/ndri/centers/isdp or contact the director (contact information is provided on the webpage).
Contents

Preface  iii
Figures and Tables  vii
Summary  ix
Acknowledgments  xix
Abbreviations  xxi

CHAPTER ONE
Social Chatbots: An Introduction  1
  An Introduction to Bots  3
  Bot Technology Review  4
  Implementing Bots on SM Platforms  5
  Bot Types  10
  Continuing Challenges  19
  Bot Overview Summary  24

CHAPTER TWO
Current Status of Bot Technology  27
  Case Studies  27
  Maturity Model  50
  Conclusions  54

CHAPTER THREE
Potential Legal and Ethical Risks  57
  Legal Considerations  58
  Ethical Considerations  67

CHAPTER FOUR
Concepts of Operation and Assessment  73
  Assessment Criteria and Levels  74
  Bot Variables  75
  Categories of Social Bot Operations  91
  An Example CONOP Assessment  107

CHAPTER FIVE
Recommendations  109
  Development and Deployment  111
  Legal and Ethical Issues  114
  Conclusions  117

APPENDIX
  A. Technology Review: Methods and Goals  119

References  121

Figures and Tables

Figures
  3.1. Transparency Spectrum  70

Tables
  S.1. Bot Terminology  x
  S.2. Bot Types  xi
  S.3. Concepts of Action: Influence and Inform  xiv
  S.4. Concepts of Action: Degrade/Disrupt Violent Extremist Networks  xiv
  S.5. Concepts of Action: Collect Intelligence  xv
  1.1. Bot Types  11
  1.2. Four Requirements for Adaptive Tactical Countermessaging Bots  21
  2.1. Sensing Capabilities  51
  2.2. Deciding Capabilities  51
  2.3. Action Capabilities  53
  3.1. Terms of Service for Select Social Media Platforms or Messaging Services  69
  4.1. Assessment Levels  74
  4.2. Variable Assessment: Deployer  76
  4.3. Variable Assessment: Audience  78
  4.4. Variable Assessment: Platform  81
  4.5. Variable Assessment: Communication Strategy  81
  4.6. Variable Assessment: Activation  83
  4.7. Variable Assessment: Automation  85
  4.8. Variable Assessment: Reliance on Artificial Intelligence and Deep Learning  86
  4.9. Variable Assessment: Data Retention  87
  4.10. Variable Assessment: Volume  88
  4.11. Variable Assessment: “Human” Disguise  89
  4.12. Variable Assessment: Attribution  90
  4.13. Concepts of Action: Influence and Inform  92
  4.14. Concepts of Action: Degrade/Disrupt Violent Extremist Networks  98
  4.15. Concepts of Action: Collect Intelligence  104
  4.16. Bot CONOP: Matchmaker  108

Summary

The speed and diffusion of online recruitment for violent extremist organizations (VEOs) such as the Islamic State of Iraq and the Levant (ISIL) have challenged existing efforts to effectively intervene and engage in counter-radicalization in the digital space. This problem contributes to global instability and violence. Groups like ISIL identify susceptible individuals through open social media (SM) dialogue and eventually seek private conversations online and offline for recruiting. This shift from open and discoverable online dialogue to private and discreet recruitment can happen quickly and offers a short window for intervention before the conversation and the targeted individuals disappear.

The counter-radicalization messaging enterprise of the U.S. government (USG) may benefit from a sophisticated capability to rapidly detect targets of VEO recruitment efforts and deliver counter-radicalization content to them. Our report examines the applicability of promising emerging technology tools, particularly automated SM accounts known as bots, to this problem. While this