Bots and Misinformation Spread on Social Media: Implications for COVID-19


JOURNAL OF MEDICAL INTERNET RESEARCH (Himelein-Wachowiak et al)

Viewpoint

Bots and Misinformation Spread on Social Media: Implications for COVID-19

McKenzie Himelein-Wachowiak1, BA; Salvatore Giorgi1,2, MSc; Amanda Devoto1, PhD; Muhammad Rahman1, PhD; Lyle Ungar2, PhD; H Andrew Schwartz3, PhD; David H Epstein1, PhD; Lorenzo Leggio1, MD, PhD; Brenda Curtis1, MSPH, PhD

1Intramural Research Program, National Institute on Drug Abuse, Baltimore, MD, United States
2Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA, United States
3Department of Computer Science, Stony Brook University, Stony Brook, NY, United States

Corresponding Author:
Brenda Curtis, MSPH, PhD
Intramural Research Program, National Institute on Drug Abuse
251 Bayview Blvd, Suite 200
Baltimore, MD, 21224, United States
Phone: 1 443 740 2126
Email: [email protected]

Abstract

As of March 2021, the SARS-CoV-2 virus has been responsible for over 115 million cases of COVID-19 worldwide, resulting in over 2.5 million deaths. As the virus spread exponentially, so did its media coverage, resulting in a proliferation of conflicting information on social media platforms, a so-called "infodemic." In this viewpoint, we survey past literature investigating the role of automated accounts, or "bots," in spreading such misinformation, drawing connections to the COVID-19 pandemic. We also review strategies used by bots to spread (mis)information and examine the potential origins of bots. We conclude by conducting and presenting a secondary analysis of data sets of known bots in which we find that up to 66% of bots are discussing COVID-19. The proliferation of COVID-19 (mis)information by bots, coupled with human susceptibility to believing and sharing misinformation, may well impact the course of the pandemic.

(J Med Internet Res 2021;23(5):e26933) doi: 10.2196/26933

KEYWORDS

COVID-19; coronavirus; social media; bots; infodemiology; infoveillance; social listening; infodemic; spambots; misinformation; disinformation; fake news; online communities; Twitter; public health
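The abstract's headline figure, that up to 66% of known bots are discussing COVID-19, comes from checking accounts in existing bot data sets for pandemic-related content. As a rough illustration of how such a check can be run, the sketch below flags an account as discussing COVID-19 if any of its tweets matches a keyword pattern; the term list, data layout, and one-match threshold are assumptions for illustration, not the authors' actual procedure.

```python
import re

# Assumed keyword list; the paper does not publish its matching terms.
COVID_TERMS = re.compile(
    r"\b(covid[- ]?19|coronavirus|sars[- ]cov[- ]?2|pandemic|quarantine)\b",
    re.IGNORECASE,
)

def discusses_covid(tweets):
    """Return True if any tweet in the account's sample matches a COVID-19 term."""
    return any(COVID_TERMS.search(t) for t in tweets)

# Toy data standing in for tweets collected from accounts already labeled as bots.
bot_timelines = {
    "bot_a": ["Masks and quarantine rules updated again", "click here for deals"],
    "bot_b": ["Buy followers now!", "hot stock tips"],
    "bot_c": ["COVID-19 cure they don't want you to know about"],
}

n_discussing = sum(discusses_covid(ts) for ts in bot_timelines.values())
print(f"{n_discussing / len(bot_timelines):.0%} of bots mention COVID-19")  # 67%
```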
Introduction

Globally, 2020 has been characterized by COVID-19, the disease caused by the SARS-CoV-2 virus. As of March 2021, the COVID-19 pandemic has been responsible for over 115 million documented cases, resulting in over 2.5 million deaths. The United States accounts for 24.9% of the world's COVID-19 cases, more than any other country [1].

As the virus spread across the United States, media coverage and information from online sources grew along with it [2]. Among Americans, 72% report using an online news source for COVID-19 information in the last week, with 47% reporting that the source was social media [3]. The number of research articles focusing on COVID-19 has also grown exponentially; more research articles about the disease were published in the first 4 months of the COVID-19 pandemic than throughout the entirety of the severe acute respiratory syndrome (SARS) and Middle East respiratory syndrome (MERS) pandemics combined [4]. Unfortunately, this breadth, and the speed with which information can travel, sets the stage for the rapid transmission of misinformation, conspiracy theories, and "fake news" about the pandemic [5]. One study found that 33% of people in the United States report having seen "a lot" or "a great deal" of false or misleading information about the virus on social media [3]. Dr Tedros Adhanom Ghebreyesus, the Director-General of the World Health Organization, referred to this accelerated flow of information about COVID-19, much of it inaccurate, as an "infodemic" [6].

Though the pandemic is ongoing, evidence is emerging regarding COVID-19 misinformation on social media. Rumors have spread about the origin of the virus, potential treatments or protections, and the severity and prevalence of the disease. In one sample of tweets related to COVID-19, 24.8% of tweets included misinformation and 17.4% included unverifiable information [7]. The authors found no difference in engagement patterns with misinformation and verified information, suggesting that myths about the virus reach as many people on Twitter as truths. A similar study demonstrated that fully false claims about the virus propagated more rapidly and were more frequently liked than partially false claims. Tweets containing false claims also had less tentative language than valid claims [8].

This trend of misinformation emerging during times of humanitarian crises and propagating via social media platforms is not new. Previous research has documented the spread of misinformation, rumors, and conspiracies on social media in the aftermath of the 2010 Haiti earthquake [9], the 2012 Sandy Hook Elementary School shooting [10], Hurricane Sandy in 2012 [11], the 2013 Boston Marathon bombings [12,13], and the 2013 Ebola outbreak [14].

Misinformation can be spread directly by humans, as well as by automated online accounts, colloquially called "bots." Social bots, which pose as real (human) users on platforms such as Twitter, use behaviors like excessive posting, early and frequent retweeting of emerging news, and tagging or mentioning influential figures in the hope they will spread the content to their thousands of followers [15]. Bots have been found to disproportionately contribute to Twitter conversations on controversial political and public health matters, although there is less evidence they are biased toward one "side" of these issues [16-18].

This paper combines a scoping review with an unpublished secondary analysis, similar in style to Leggio et al [19] and Zhu et al [20]. We begin with a high-level survey of the current bot literature: how bots are defined, what technical features distinguish bots, and the detection of bots using machine learning methods. We also examine how bots spread information, including misinformation, and explore the potential consequences with respect to the COVID-19 pandemic. Finally, we analyze and present the extent to which known bots are publishing COVID-19-related content.

What Are Bots?

Before addressing issues surrounding the spread of misinformation, we provide a definition of bots, describe their typical features, and explain how detection algorithms identify bots.

Definition and Identification

Bots, shorthand for "software robots," come in a large variety of forms. There is a common conception that all bots spam or spread malware, but this is not the case. Some bots are benign, like the Twitter account @big_ben_clock, which impersonates the real Big Ben clock by tweeting the time every hour [21]. Others have even been used for social good, such as Botivist, which is a Twitter bot platform used for recruiting volunteers and donations [22]. Groups of bots can function in coordination with each other, forming what are called botnets [23]. One botnet of roughly 13,000 bot accounts was observed tweeting about Brexit, with most of these bot accounts disappearing from Twitter shortly after the vote [24]. Bots of all types are ubiquitous on social media and have been studied on Reddit [25,26], Facebook [27], YouTube [28], and Twitter [29], among other platforms.
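As a concrete illustration of how little machinery a benign bot such as @big_ben_clock needs, here is a minimal sketch of an hourly time-announcing bot. The `post_tweet` function is a hypothetical stand-in for a platform's authenticated posting call; everything else is the Python standard library.

```python
import time
from datetime import datetime, timezone

def post_tweet(text: str) -> None:
    """Hypothetical stand-in for a platform API call (eg, a 'create post'
    endpoint invoked with proper OAuth credentials)."""
    print(f"[posted] {text}")

def bong_message(now: datetime) -> str:
    """Mimic @big_ben_clock: one 'BONG' per hour on a 12-hour clock."""
    hour = now.hour % 12 or 12
    return " ".join(["BONG"] * hour)

while True:  # run indefinitely, posting once per hour
    now = datetime.now(timezone.utc)
    post_tweet(bong_message(now))
    # Sleep until the top of the next hour.
    time.sleep(3600 - (now.minute * 60 + now.second))
```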
Given their large variety, bots are often organized into subclasses, a selection of which we discuss here. Content polluters are one subclass; these are "accounts that disseminate malware and unsolicited content." Traditional spambots, another subclass, are "designed to be recognizable as bots" [30]. Social bots, a newer, more advanced type of bot [31-33], use "a computer algorithm that automatically produces content and interacts with humans on social media, trying to emulate and possibly alter their behavior." There are also hybrid human-bot accounts (often called cyborgs) [34], which "exhibit human-like behavior and messages through loosely structured, generic, automated messages and from borrowed content copied from other sources" [35]. It is not always clear which category a bot may fall into (eg, if a given social bot is also a cyborg).

Various methods have been used to identify bots "in the wild," so as to build the data sets of known bots used to train bot-detection algorithms. One method, the "social honeypot" [36], mimics methods traditionally used by researchers to monitor hacker activity [37] and email harvesting [38]. Specifically, social honeypots are fake social media profiles set up with characteristics desirable to spammers, such as certain demographics, relationship statuses, and profile pictures [39]. When bots attempt to spam the honeypots (by linking malware-infested content or pushing product websites), researchers can easily identify them.

Technical Features of Bots

Overview

Features that distinguish bots from humans roughly fall into three categories: (1) network properties, such as hashtags and friend/follower connections; (2) account activity and temporal patterns; and (3) profile and tweet content. These feature categories have the advantage of being applicable across different social media platforms [27].

Network Properties

Networks based on friend/follower connections, hashtag use, retweets, and mentions have been used in a number of studies that seek to identify social bots [40-43], exploiting network homophily (ie, humans tend to follow other humans and bots tend to follow other bots). As bots become more sophisticated, network properties become less indicative of them; studies have
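The three feature categories described in the Overview map naturally onto a supervised classifier of the kind the cited detection studies train. The sketch below is a minimal illustration under stated assumptions: given a small labeled sample (eg, bots identified via honeypots), it derives one toy feature per category (follower/friend ratio for network properties, variability of posting gaps for temporal patterns, and URL rate for content) and fits a random forest. Real detectors use far richer feature sets; the field names and thresholds here are invented for the example.

```python
import statistics
from sklearn.ensemble import RandomForestClassifier

def features(account):
    """One illustrative feature per category: (1) network, (2) temporal, (3) content."""
    follower_ratio = account["followers"] / max(account["friends"], 1)
    gaps = [b - a for a, b in zip(account["post_times"], account["post_times"][1:])]
    # Bots often post on rigid schedules, so low variability in gaps is suspicious.
    gap_stdev = statistics.pstdev(gaps) if len(gaps) > 1 else 0.0
    url_rate = sum("http" in t for t in account["tweets"]) / len(account["tweets"])
    return [follower_ratio, gap_stdev, url_rate]

# Toy labeled accounts (1 = bot, 0 = human); stand-ins for honeypot-derived data.
accounts = [
    {"followers": 3, "friends": 900, "post_times": [0, 60, 120, 180],
     "tweets": ["buy now http://x", "deal http://y"], "label": 1},
    {"followers": 250, "friends": 180, "post_times": [0, 47, 300, 1400],
     "tweets": ["lunch was great", "excited for the game http://z"], "label": 0},
    {"followers": 8, "friends": 1200, "post_times": [0, 60, 121, 181],
     "tweets": ["click http://a", "click http://b"], "label": 1},
    {"followers": 400, "friends": 350, "post_times": [0, 200, 650, 900],
     "tweets": ["good morning", "coffee time"], "label": 0},
]

X = [features(a) for a in accounts]
y = [a["label"] for a in accounts]
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X))  # on toy training data: [1 0 1 0]
```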
Recommended publications
  • Effects of Social Bots in the Iran-Debate on Twitter (arXiv:1805.10105v1 [cs.SI], 25 May 2018)
    Effects of Social Bots in the Iran-Debate on Twitter. Andree Thieltges, Orestis Papakyriakopoulos, Juan Carlos Medina Serrano, Simon Hegelich; Bavarian School of Public Policy, Technical University of Munich. Abstract: 2018 started with massive protests in Iran, bringing back the impressions of the so-called "Arab Spring" and its revolutionary impact for the Maghreb states, Syria and Egypt. Many reports and scientific examinations considered online social networks (OSNs) such as Twitter or Facebook to play a critical role in the opinion making of people behind those protests. Beside that, there is also evidence for directed manipulation of opinion with the help of social bots and fake accounts. So, it is obvious to ask if there is an attempt to manipulate the opinion-making process related to the Iranian protests in OSNs by employing social bots, and how such manipulations will affect the discourse as a whole. Based on a sample of ca.
    From the introduction: Ahmadinejad caused nationwide unrests and protests. As these protests grew, the Iranian government shut down foreign news coverage and restricted the use of cell phones, text-messaging and internet access. Nevertheless, Twitter and Facebook "became vital tools to relay news and information on anti-government protest to the people inside and outside Iran"[1]. While Ayatollah Khameini characterized the influence of Twitter as "deviant" and inappropriate on Iranian domestic affairs[2], most of the foreign news coverage hailed Twitter to be "a new and influential medium for social movements and international politics" (Burns and Eltham 2009). Beside that, there was already a critical view upon the influence of OSNs as "tools" to express political opinions
  • Automated Tackling of Disinformation
    Automated tackling of disinformation: Major challenges ahead. Study by the Panel for the Future of Science and Technology, European Science-Media Hub (ESMH), EPRS | European Parliamentary Research Service, Scientific Foresight Unit (STOA), PE 624.278, March 2019. This study maps and analyses current and future threats from online misinformation, alongside currently adopted socio-technical and legal approaches. The challenges of evaluating their effectiveness and practical adoption are also discussed. Drawing on and complementing existing literature, the study summarises and analyses the findings of relevant journalistic and scientific studies and policy reports in relation to detecting, containing and countering online disinformation and propaganda campaigns. It traces recent developments and trends and identifies significant new or emerging challenges. It also addresses potential policy implications for the EU of current socio-technical solutions. Authors: This study was written by Alexandre Alaphilippe, Alexis Gizikis and Clara Hanot of EU DisinfoLab, and Kalina Bontcheva of The University of Sheffield, at the request of the Panel for the Future of Science and Technology (STOA). It has been financed under the European Science and Media Hub budget and managed by the Scientific Foresight Unit within the Directorate-General for Parliamentary Research Services (EPRS) of the Secretariat of the European Parliament. Acknowledgements: The authors wish to thank all respondents to the online survey, as well as First Draft, WeVerify, InVID, PHEME, REVEAL, and all other initiatives that contributed materials to the study. Administrator responsible: Mihalis Kritikos, Scientific Foresight Unit. To contact the publisher, please e-mail [email protected]. Linguistic version: Original: EN. Manuscript completed in March 2019.
  • Political Astroturfing Across the World
    Political astroturfing across the world. Franziska B. Keller (Hong Kong University of Science and Technology), David Schoch (The University of Manchester), Sebastian Stier (GESIS – Leibniz Institute for the Social Sciences), JungHwan Yang (University of Illinois at Urbana-Champaign). Paper prepared for the Harvard Disinformation Workshop Update. 1 Introduction: At the very latest since the Russian Internet Research Agency's (IRA) intervention in the U.S. presidential election, scholars and the broader public have become wary of coordinated disinformation campaigns. These hidden activities aim to deteriorate the public's trust in electoral institutions or the government's legitimacy, and can exacerbate political polarization. But unfortunately, academic and public debates on the topic are haunted by conceptual ambiguities, and rely on few memorable examples, epitomized by the often cited "social bots" who are accused of having tried to influence public opinion in various contemporary political events. In this paper, we examine "political astroturfing," a particular form of centrally coordinated disinformation campaigns in which participants pretend to be ordinary citizens acting independently (Kovic, Rauchfleisch, Sele, & Caspar, 2018). While the accounts associated with a disinformation campaign may not necessarily spread incorrect information, they deceive the audience about the identity and motivation of the person manning the account. And even though social bots have been in the focus of most academic research (Ferrara, Varol, Davis, Menczer, & Flammini, 2016; Stella, Ferrara, & De Domenico, 2018), seemingly automated accounts make up only a small part – if any – of most astroturfing campaigns. For instigators of astroturfing, relying exclusively on social bots is not a promising strategy, as humans are good at detecting low-quality information (Pennycook & Rand, 2019). On the other hand, many bots detected by state-of-the-art social bot detection methods are not part of an astroturfing campaign, but unrelated non-political spam bots.
  • Understanding Users' Perspectives of News Bots in the Age of Social Media
    Sustainability (journal article): Utilizing Bots for Sustainable News Business: Understanding Users' Perspectives of News Bots in the Age of Social Media. Hyehyun Hong (Department of Advertising and Public Relations, Chung-Ang University, Seoul 06974, Korea; [email protected]) and Hyun Jee Oh (corresponding author; Department of Communication Studies, Hong Kong Baptist University, Kowloon Tong, Kowloon, Hong Kong SAR, China; [email protected]). Received: 15 July 2020; Accepted: 10 August 2020; Published: 12 August 2020. Abstract: The move of news audiences to social media has presented a major challenge for news organizations. How to adapt and adjust to this social media environment is an important issue for sustainable news business. News bots are one of the key technologies offered in the current media environment and are widely applied in news production, dissemination, and interaction with audiences. While benefits and concerns coexist about the application of bots in news organizations, the current study aimed to examine how social media users perceive news bots, the factors that affect their acceptance of bots in news organizations, and how this is related to their evaluation of social media news in general. An analysis of the US national survey dataset showed that self-efficacy (confidence in identifying content from a bot) was a successful predictor of news bot acceptance, which in turn resulted in a positive evaluation of social media news in general. In addition, an individual's perceived prevalence of social media news from bots had an indirect effect on acceptance by increasing self-efficacy. The results are discussed with the aim of providing a better understanding of news audiences in the social media environment, and practical implications for the sustainable news business are suggested.
  • Recent Trends in Online Foreign Influence Efforts
    Recent Trends in Online Foreign Influence Efforts. Diego A. Martin, Jacob N. Shapiro, Michelle Nedashkovskaya; Woodrow Wilson School of Public and International Affairs, Princeton University, Princeton, New Jersey, United States. E-mail: [email protected]; [email protected]; [email protected]. Abstract: Foreign governments have used social media to influence politics in a range of countries by promoting propaganda, advocating controversial viewpoints, and spreading disinformation. We analyze 53 distinct foreign influence efforts (FIEs) targeting 24 different countries from 2013 through 2018. FIEs are defined as (i) coordinated campaigns by one state to impact one or more specific aspects of politics in another state (ii) through media channels, including social media, (iii) by producing content designed to appear indigenous to the target state. The objective of such campaigns can be quite broad and to date have included influencing political decisions by shaping election outcomes at various levels, shifting the political agenda on topics ranging from health to security, and encouraging political polarization. We draw on more than 460 media reports to identify FIEs, track their progress, and classify their features. Introduction: Information and Communications Technologies (ICTs) have changed the way people communicate about politics and access information on a wide range of topics (Foley 2004, Chigona et al. 2009). Social media in particular has transformed communication between leaders and voters by enabling direct politician-to-voter engagement outside traditional avenues, such as speeches and press conferences (Ott 2017). In the 2016 U.S. presidential election, for example, social media platforms were more widely viewed than traditional editorial media and were central to the campaigns of both Democratic candidate Hillary Clinton and Republican candidate Donald Trump (Enli 2017).
  • Supplementary Materials for "The science of fake news"
    www.sciencemag.org/content/359/6380/1094/suppl/DC1. Supplementary Materials for "The science of fake news". David M. J. Lazer,*† Matthew A. Baum,* Yochai Benkler, Adam J. Berinsky, Kelly M. Greenhill, Filippo Menczer, Miriam J. Metzger, Brendan Nyhan, Gordon Pennycook, David Rothschild, Michael Schudson, Steven A. Sloman, Cass R. Sunstein, Emily A. Thorson, Duncan J. Watts, Jonathan L. Zittrain. *These authors contributed equally to this work. †Corresponding author. Email: [email protected]. Published 9 March 2018, Science 359, 1094 (2018). DOI: 10.1126/science.aao2998. This PDF file includes: List of Author Affiliations, Supporting Materials, References. List of Author Affiliations: David M. J. Lazer,1,2*† Matthew A. Baum,3* Yochai Benkler,4,5 Adam J. Berinsky,6 Kelly M. Greenhill,7,3 Filippo Menczer,8 Miriam J. Metzger,9 Brendan Nyhan,10 Gordon Pennycook,11 David Rothschild,12 Michael Schudson,13 Steven A. Sloman,14 Cass R. Sunstein,4 Emily A. Thorson,15 Duncan J. Watts,12 Jonathan L. Zittrain4,5. 1Network Science Institute, Northeastern University, Boston, MA 02115, USA. 2Institute for Quantitative Social Science, Harvard University, Cambridge, MA 02138, USA. 3John F. Kennedy School of Government, Harvard University, Cambridge, MA 02138, USA. 4Harvard Law School, Harvard University, Cambridge, MA 02138, USA. 5Berkman Klein Center for Internet and Society, Cambridge, MA 02138, USA. 6Department of Political Science, Massachusetts Institute of Technology, Cambridge, MA 02139, USA. 7Department of Political Science, Tufts University, Medford, MA 02155, USA. 8School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN 47405, USA. 9Department of Communication, University of California, Santa Barbara, Santa Barbara, CA 93106, USA.
  • Viral Tweets, Fake News and Social Bots in Post-Factual Politics: The Case of the French Presidential Elections 2017
    BIG IDEAS! Challenging Public Relations Research and Practice. Viral Tweets, Fake News and Social Bots in Post-Factual Politics: The Case of the French Presidential Elections 2017. Alex Frame, Gilles Brachotte, Eric Leclercq, Marinette Savonnet ([email protected]). 20th Euprera Congress, Aarhus, 27-29th September 2018. Post-Factual Politics in the Age of Social Media: 1. Algorithms based on popularity rather than veracity, linked to a business model based on views and likes, where novelty and sensationalism are of the essence; 2. Social trends of "whistle-blowing" linked to conspiracy theories and a pejorative image of corporate or institutional communication, which cast doubt on the neutrality of traditionally so-called expert figures, such as independent bodies or academics; 3. Algorithm-fuelled social dynamics on social networking sites which structure publics by similarity, leading to a fragmented digital public sphere where like-minded individuals congregate digitally, providing an "echo-chamber" effect susceptible to encourage even the most extreme political views and the "truths" upon which they are based. Fake News at the Presidential Elections. Political Social Bots on Twitter: Increasingly widespread around the world, over at least the last decade. Kremlin bot army (Lawrence Alexander on GlobalVoices.org; Stukal et al., 2017). DFRLab (Atlantic Council). Social Bots: A "social bot" is: "a computer algorithm that automatically produces content and
  • Empirical Comparative Analysis of Bot Evidence in Social Networks
    Bots in Nets: Empirical Comparative Analysis of Bot Evidence in Social Networks. Ross Schuchard (corresponding author; Computational Social Science Program, Department of Computational and Data Sciences, George Mason University, Fairfax, VA 22030, USA; [email protected]), Andrew Crooks (Computational Social Science Program and Department of Geography and Geoinformation Science, George Mason University), Anthony Stefanidis (Department of Geography and Geoinformation Science and Criminal Investigations and Network Analysis Center, George Mason University), and Arie Croitoru (Department of Geography and Geoinformation Science, George Mason University). Abstract: The emergence of social bots within online social networks (OSNs) to diffuse information at scale has given rise to many efforts to detect them. While methodologies employed to detect the evolving sophistication of bots continue to improve, much work can be done to characterize the impact of bots on communication networks. In this study, we present a framework to describe the pervasiveness and relative importance of participants recognized as bots in various OSN conversations. Specifically, we harvested over 30 million tweets from three major global events in 2016 (the U.S. Presidential Election, the Ukrainian Conflict and Turkish Political Censorship) and compared the conversational patterns of bots and humans within each event. We further examined the social network structure of each conversation to determine if bots exhibited any particular network influence, while also determining bot participation in key emergent network communities. The results showed that although participants recognized as social bots comprised only 0.28% of all OSN users in this study, they accounted for a significantly large portion of prominent centrality rankings across the three conversations. This includes the identification of individual bots as top-10 influencer nodes out of a total corpus consisting of more than 2.8 million nodes.
  • Disinformation, 'Fake News' and Influence Campaigns on Twitter
    Disinformation, 'Fake News' and Influence Campaigns on Twitter. October 2018. Matthew Hindman (George Washington University), Vlad Barash (Graphika). Contents: Executive Summary; Introduction; A Problem Both Old and New; Defining Fake News Outlets; Bots, Trolls and 'Cyborgs' on Twitter; Map Methodology; Election Data and Maps (Election Core Map, Election Periphery Map, Postelection Map); Fake Accounts From Russia's Most Prominent Troll Farm; Disinformation Campaigns on Twitter: Chronotopes (#NoDAPL, #WikiLeaks, #SpiritCooking, #SyriaHoax, #SethRich); Conclusion; Bibliography; Notes. Executive Summary: This study is one of the largest analyses to date on how fake news spread on Twitter both during and after the 2016 election campaign. Using tools and mapping methods from Graphika, a social media intelligence firm, we study more than 10 million tweets from 700,000 Twitter accounts that linked to more than 600 fake and conspiracy news outlets. Crucially, we study fake and conspiracy news both before and after the election, allowing us to measure how the fake news ecosystem has evolved since November 2016. Much fake news and disinformation is still being spread on Twitter. Consistent with other research, we find more than 6.6 million tweets linking to fake and conspiracy news publishers in the month before the 2016 election. Yet disinformation continues to be a substantial problem postelection, with 4.0 million tweets linking to fake and conspiracy news publishers found in a 30-day period from mid-March to mid-April 2017. Contrary to claims that fake news is a game of "whack-a-mole," more than 80 percent of the disinformation accounts in our election maps are still active as this report goes to press.
  • Hacks, Leaks and Disruptions | Russian Cyber Strategies
    CHAILLOT PAPER Nº 148, October 2018. Hacks, leaks and disruptions: Russian cyber strategies. Edited by Nicu Popescu and Stanislav Secrieru, with contributions from Siim Alatalu, Irina Borogan, Elena Chernenko, Sven Herpig, Oscar Jonsson, Xymena Kurowska, Jarno Limnell, Patryk Pawlak, Piret Pernik, Thomas Reinhold, Anatoly Reshetnikov, Andrei Soldatov and Jean-Baptiste Jeangène Vilmer. Disclaimer: The views expressed in this Chaillot Paper are solely those of the authors and do not necessarily reflect the views of the Institute or of the European Union. European Union Institute for Security Studies, Paris. Director: Gustav Lindstrom. © EU Institute for Security Studies, 2018. Reproduction is authorised, provided prior permission is sought from the Institute and the source is acknowledged, save where otherwise stated. Contents: Executive summary; Introduction: Russia's cyber prowess – where, how and what for? (Nicu Popescu and Stanislav Secrieru); Russia's cyber posture: 1. Russia's approach to cyber: the best defence is a good offence (Andrei Soldatov and Irina Borogan); 2. Russia's trolling complex at home and abroad (Xymena Kurowska and Anatoly Reshetnikov); 3. Spotting the bear: credible attribution and Russian operations in cyberspace (Sven Herpig and Thomas Reinhold); 4. Russia's cyber diplomacy (Elena Chernenko); Case studies of Russian cyberattacks: 5. The early days of cyberattacks: the cases of Estonia,
  • Artificial Intelligence and Countering Violent Extremism: A Primer
    Artificial Intelligence and Countering Violent Extremism: A Primer. By Marie Schroeter. GNET is a special project delivered by the International Centre for the Study of Radicalisation, King's College London. The author of this report is Marie Schroeter, Mercator Fellow on New Technology in International Relations: Potentials and Limitations of Artificial Intelligence to Prevent Violent Extremism Online. The Global Network on Extremism and Technology (GNET) is an academic research initiative backed by the Global Internet Forum to Counter Terrorism (GIFCT), an independent but industry-funded initiative for better understanding, and counteracting, terrorist use of technology. GNET is convened and led by the International Centre for the Study of Radicalisation (ICSR), an academic research centre based within the Department of War Studies at King's College London. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing those, either expressed or implied, of GIFCT, GNET or ICSR. We would like to thank Tech Against Terrorism for their support with this report. Contact details: For questions, queries and additional copies of this report, please contact: ICSR, King's College London, Strand, London WC2R 2LS, United Kingdom. T. +44 20 7848 2098. E. [email protected]. Twitter: @GNET_research. Like all other GNET publications, this report can be downloaded free of charge from the GNET website at www.gnet-research.org. © GNET. Executive Summary: Radicalisation can take place offline as well as online.
  • "Innovator, Fool, I Dream": Deploying Effective Knowledge Management Principles to Combat Information Overload in Law Firms
    “Innovator, Fool, I Dream”1: Deploying Effective Knowledge Management Principles to Combat Information Overload in Law Firms By Ted Tjaden Director of Knowledge Management McMillan Binch Mendelsohn LLP (Toronto) Information overload is a serious problem for many knowledge workers. Increasingly, lawyers and law firms are exposed – willingly and unwillingly – to multiple sources and types of information through and from legal publishers, the Internet, intranets, newspapers, journals, emails, voice mails, faxes, and personal digital assistants. This barrage of information can and does lead to information overload. By definition, this information overload occurs when the amount of information received exceeds the ability to process that information.2 Studies clearly show that persons exposed to excessive amounts of information are less productive, prone to make bad decisions, and risk suffering serious stress-related diseases.3 Arguably, knowledge management (KM) is the way to combat information overload at the individual, team, and organizational level. 1 “Innovator, Fool, I Dream” is one of the many clever anagrams for the phrase “Information Overload” identified by Steve Witham in “Too Many Anagrams for “Information Overload” (January 2003), available online: <http://www.tiac.net/~sw/2003/01/overload.html> (Last visited: March 17, 2007). Other anagrams for information overload on that site include: Inane vomit flood roar; Forlorn media ovation; Moderator – fool, in vain; Mr. Noodle of Variation; I love a random info rot; Admiration of no lover; In an old favorite room, and Noontime-arrival food. Also – attributed at the foregoing site to David Britain <http://www.davidbrittan.com> – And no more trivia, fool. 2 Ingrid Mulder et al., “An Information Overload Study: Using Design Methods for Understanding” at 245 (Conference paper presented at the Australasian Computer-Human Interaction Conference 2006).