
2nd EXPLORING MEDIA ECOSYSTEMS CONFERENCE

Monday, MARCH 2nd

8:30 Registration

9:00 Exploring Media Ecosystems: Welcome remarks by Ethan Zuckerman, Director, Center for Civic Media, MIT Media Lab

9:30 Media Ecosystems Explorations:

§ Jean-Philippe Cointet, Sciences Po: The structure of the French media ecosystem and the Yellow vest movement
§ Fabio Giglietto, Università di Urbino Carlo Bo: Exploring Media Ecosystems Through Partisan Media Attention: Lessons Learned from Two Elections in the European Laboratory of Social Media Populisms
§ Jeremy Blackburn & Gianluca Stringhini, iDrama Lab: Computational Methods to Measure and Mitigate Weaponized Online Information

Jean-Philippe Cointet, Sciences Po: The structure of the French media ecosystem and the Yellow vest movement

Abstract: We study the structure of the French media ecosystem using a dual approach. We distinguish between an internal analysis drawn from the perspective of information producers and an external analysis on the audience side, which we approach using Twitter data. We use a stochastic block model to identify the structure of both networks. The French media sphere exhibits a clear divide between legitimate media and the counter-informational space, with the former still attracting a very large majority of the attention. The structure of both networks largely overlaps, indicating that the circulation of misinformation on Twitter is likely to be confined to an isolated audience. We also perform ideology scaling using Twitter data and show that the ideology of news stories shared on Twitter is relatively homogeneously distributed. We therefore conclude that the French media ecosystem does not suffer from the same level of polarization as the US media ecosystem. We will finally consider how this organization is being challenged at another layer of the public space: conversations on Facebook. We will show, in particular, how Yellow Vests group pages reshuffle the established order among the different media.
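The block-structure analysis described in the abstract can be illustrated with a toy example. The sketch below is illustrative only, not the authors' code: it generates a small two-block network from a stochastic block model with networkx, then recovers the planted divide using greedy modularity maximization as a lightweight stand-in for full SBM inference; the block sizes and edge probabilities are assumptions.

```python
# Illustrative sketch: planted two-block "media ecosystem" network.
# The study fits a stochastic block model (SBM); here we instead
# generate a graph *from* an SBM and recover the blocks with greedy
# modularity maximization as a simple stand-in for SBM inference.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Block 0: "legitimate media"; block 1: "counter-informational space".
sizes = [30, 30]
# Dense within blocks, sparse between them, mirroring the reported divide.
probs = [[0.25, 0.02],
         [0.02, 0.25]]
G = nx.stochastic_block_model(sizes, probs, seed=7)

communities = greedy_modularity_communities(G)
print(f"recovered {len(communities)} communities "
      f"covering {sum(len(c) for c in communities)} nodes")
```

With a strong planted structure like this, the recovered communities closely track the two blocks; on real sharing networks the number of blocks is itself inferred rather than fixed in advance.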

Fabio Giglietto, Università di Urbino Carlo Bo: Exploring Media Ecosystems Through Partisan Media Attention: Lessons Learned from Two Elections in the European Laboratory of Social Media Populisms

Abstract: During the last decade, Italy proved to be a fertile ground for the rise of digital parties and social media driven populisms. We explored the Italian digital media ecosystem by analyzing two sets of election-related political news stories and their patterns of social media engagement in the lead-up to the 2018 general election and the 2019 European Parliament election. Overall, we documented multiple efforts carried out by partisan online communities to amplify the social media reach of certain political news stories and skew the public narrative around certain issues. Leveraging a method originally developed for the 2016 US Presidential Election, we found that partisans of Italian populist parties tend to rely on insular media sources (sources prevalently shared online by a single political faction) more than others, and detected a specific pattern of interaction on Facebook (comments/shares ratio) possibly associated with strategic amplification and reframing. By looking at the news stories shared by multiple Facebook accounts, pages, and public groups, we then identified several networks that performed what we call "Coordinated Link Sharing Behaviour" (repeatedly sharing the same links within a very short period of time). Both in 2018 and 2019, news stories shared by these networks of coordinated actors received a higher volume of engagement when compared with other stories. Furthermore, this practice was also found to be consistently associated with a high percentage of sources blacklisted by fact-checkers as problematic. Besides presenting the results and the methodology used to estimate partisan media attention in a multi-party context and detect coordinated link sharing behavior, we discuss the implications of comparing different media ecosystems and outline paths for future studies.
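The "Coordinated Link Sharing Behaviour" defined in the abstract (the same links shared repeatedly by multiple accounts within a very short period) can be sketched as a simple co-share detector. This is a hypothetical illustration, not the authors' pipeline: the 60-second window and the minimum of two co-shares are assumed parameters.

```python
# Illustrative sketch (not the authors' implementation): flag pairs of
# accounts that repeatedly share the same link within a short window.
from collections import defaultdict
from itertools import combinations

WINDOW_SECONDS = 60  # "very short period of time" threshold (assumed)
MIN_COSHARES = 2     # repeated co-sharing required to flag a pair (assumed)

def coordinated_pairs(shares, window=WINDOW_SECONDS, min_coshares=MIN_COSHARES):
    """shares: iterable of (account, url, unix_timestamp) tuples."""
    by_url = defaultdict(list)
    for account, url, ts in shares:
        by_url[url].append((ts, account))

    pair_counts = defaultdict(int)
    for url, posts in by_url.items():
        posts.sort()  # order shares of this link by time
        for (t1, a1), (t2, a2) in combinations(posts, 2):
            if a1 != a2 and abs(t2 - t1) <= window:
                pair_counts[frozenset((a1, a2))] += 1
    return {pair for pair, n in pair_counts.items() if n >= min_coshares}

shares = [
    ("pageA", "http://ex.org/1", 0), ("pageB", "http://ex.org/1", 10),
    ("pageA", "http://ex.org/2", 500), ("pageB", "http://ex.org/2", 530),
    ("pageC", "http://ex.org/1", 90000),  # shares the link a day later
]
# pageA and pageB co-share two links within the window; pageC does not.
print(coordinated_pairs(shares))
```

The pairwise comparison is quadratic per link; at the scale of real election datasets, the published work instead relies on efficient per-link time-window grouping, but the flagging logic is the same in spirit.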

Jeremy Blackburn & Gianluca Stringhini, iDrama Lab: Computational Methods to Measure and Mitigate Weaponized Online Information

Abstract: The Web has been one of the most impactful technologies ever, and over the past twenty years or so, has helped advance society in ways no one thought possible. Ubiquitous connectivity has enabled instant communication with anyone in the world. Social media has helped us strengthen existing relationships and form new ones. The vast amount of content on the Web has broadened our outlook and let us learn about things we never even knew existed. Unfortunately, along with these benefits has come a set of worrying problems. Powerful new communication mediums have been hijacked to spread false information and extremist ideology, and social media has been exploited to wage information warfare. Although these problems are not necessarily new, their scale and speed, coupled with advances in technology, make them fundamentally different from past incarnations. To top it all off, we know very little about these socio-technical problems, making it difficult to even begin to solve them. In this talk, we will present our work towards measuring and understanding these new problems. We will show how seemingly isolated communities on the Web have outsized influence in terms of spreading "fake news" throughout the greater Web. We will also show how Web-born phenomena, i.e., memes, are created, evolve, and are harnessed to spread hateful ideology and propaganda. Finally, we will touch on some of the risks faced by researchers hoping to address these new socio-technical problems.

11:00 Coffee break

11:30 Disinformation around the world

§ Samantha Bradshaw, Oxford Internet Institute: Social Media Manipulation: Algorithms, Bots, and Computational Propaganda
§ Lutz Güllner, European External Action Service: How to design policies to address disinformation – a European perspective

Samantha Bradshaw, Oxford Internet Institute: Social Media Manipulation: Algorithms, Bots, and Computational Propaganda

March 2nd – 3rd 2020, Samberg Conference Center, MIT Supported by the Bill & Melinda Gates Foundation

Abstract: Disinformation spread on social media – and the impact it has on democracy and elections – has become a major public interest issue. By examining the global organization of social media manipulation – including the actors, strategies, and tools employed to shape public opinion – this presentation will describe how social networking technologies have been co-opted to suppress fundamental human rights, polarize voters, and undermine trust in the media and government institutions in democracies around the world. Looking both broadly at global trends in elections – and specifically at case studies of platform (ab)use – this presentation will provide a global overview of how social media are affecting civic participation and democratic outcomes.

Lutz Güllner, European External Action Service: How to design policies to address disinformation – a European perspective

Abstract: The European Union has made considerable progress in designing policies to address disinformation. In the run-up to the elections to the European Parliament in May 2019, a specific policy approach was designed in the context of an "Action Plan against Disinformation". The plan takes a multi-angle approach, focusing on four key pillars: (1) analysis and exposure; (2) improved coordination and international cooperation; (3) platform regulation; and (4) awareness raising and media literacy. This approach is currently being reviewed in the context of the development of new regulatory and political instruments. I will also present our insights into the development of the threat, as well as highlight some of the persistent challenges.

12:30 Lunch break

13:30 Disinformation across platforms

§ Kiran Garimella, Institute for Data, Systems, and Society, MIT: Misinformation on WhatsApp
§ Alexei Abrahams, Citizen Lab, University of Toronto: Twitter and MENA region
§ Kameswari Chebrolu, Indian Institute of Technology Mumbai: KauwaKaate Fact Checking Platform
§ Emily Ndulue & Aashka Dave, Center for Civic Media, MIT: Disinformation on Greta Thunberg
§ Gabrielle Lim, Shorenstein Center, Harvard University: The securitization of "fake news" in Malaysia and beyond

Kiran Garimella, Institute for Data, Systems, and Society, MIT: Misinformation on WhatsApp

Abstract: In this work, I will cover five aspects of our ongoing research on WhatsApp. First, I will talk about our efforts to collect and analyze large amounts of public WhatsApp data from political groups in Brazil, India, and Indonesia. Second, I will discuss how such data could be useful for journalists, fact-checking organizations, and researchers, especially during high-profile events like elections. Third, I will take a deep dive into our work on image-based misinformation, showcasing novel ways images are being used to spread false information. Fourth, I will present potential solutions to tackle the misinformation problem on WhatsApp by enabling on-device fact checking, and show that it works surprisingly well, even with a simple design. I will also show the effectiveness of WhatsApp's decision to reduce forwarding limits. Finally, I will showcase how WhatsApp is being used for cross-platform coordination at scale by the ruling party in India.


Alexei Abrahams, Citizen Lab, University of Toronto: Twitter and MENA region

Abstract: Heralded as 'liberation technology' circa 2011 for its role as a facilitator of anti-authoritarian uprisings across the MENA region, social media has more recently become a contested space in which both revolutionary and counter-revolutionary social forces compete to distort public opinion in a manner that favors their respective interests. Drawing on a Twitter dataset of several hundred hashtags from both inside and outside of the MENA region, collected since October 2019, we investigate the role of bots and influencers in shaping the discourse. Using several credible methods for detecting bots, we find relatively low rates of bot prevalence, contrary to the prevailing view among scholars and journalists of the region. On the other hand, we find profoundly skewed distributions of influence, suggesting regional discourses (both pro- and anti-authoritarian) are monopolized by a narrow elite. We discuss implications for further research and for the prospects of democratization in the region.

Kameswari Chebrolu, Indian Institute of Technology Mumbai: KauwaKaate Fact Checking Platform

Abstract: Our platform KauwaKaate aims to empower fact-checking organizations and fact-checkers by streamlining the involved workflow of fact checking. We employ an AI-human-hybrid approach, wherein we automate, to the extent possible, the manual processing involved in fact checking. Our platform allows the general public to submit queries for fact checking via WhatsApp, an Android app, or a Web interface. We extract these queries in real time from the relevant input channel and then process them to eliminate spam, determine the subject category (e.g., politics, technology, health), the language, and, importantly, whether a similar query has already been fact-checked on the Internet. Existing verdicts, if any, are returned to the general public via the relevant channel; otherwise, we merge similar queries and present them as a task on a user-friendly dashboard, along with auto-gathered evidence (e.g., similar images/videos with timestamps, video transcription/text, contextual analysis across text/images), to a human fact-checker to speed up their processing. In addition to this main thread, we are also working on two others: 1) given our large database of fact-checked articles, employ it via gaming chatbots to educate the public on how to identify fake news; and 2) analyze how political parties employ social media platforms (WhatsApp, Twitter, and Facebook) during campaigning, and the extent of misinformation within them.
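One step of the workflow described above, checking whether an incoming query matches an already fact-checked claim, can be sketched with simple string similarity. Everything here is illustrative: the claim database, the `existing_verdict` helper, and the 0.6 threshold are assumptions for the sketch, not KauwaKaate's actual implementation.

```python
# Hypothetical sketch of one pipeline step: does an incoming query
# resemble a claim that has already been fact-checked?
from difflib import SequenceMatcher

# Toy database of previously fact-checked claims and their verdicts.
FACT_CHECKED = {
    "drinking hot water cures covid-19": "False",
    "onions absorb viruses in a room": "False",
}

def existing_verdict(query, threshold=0.6):
    """Return (claim, verdict) for the closest known claim, if similar enough."""
    def ratio(claim):
        return SequenceMatcher(None, query.lower(), claim).ratio()

    best = max(FACT_CHECKED, key=ratio)
    if ratio(best) >= threshold:
        return best, FACT_CHECKED[best]
    return None  # new claim: route it to a human fact-checker's dashboard

print(existing_verdict("Does drinking hot water cure COVID-19?"))
```

A production system would use embeddings or indexed retrieval rather than pairwise character matching, but the routing decision (return an existing verdict vs. queue for a human) is the same.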

Emily Ndulue & Aashka Dave, Center for Civic Media, MIT: Disinformation on Greta Thunberg

Abstract: Greta Thunberg’s rise to global prominence was notable for the climate-oriented discourse and activism it sparked, but also for the level of hateful or mis- and disinformative narratives that began to spread about her. We analyzed digital discourse across Facebook, Reddit, Twitter, and the open web to better understand the spread of such narratives and also to explore methods in multi-platform media analysis; these methods now inform our work on the multi-platform functionality of the Media Cloud system. In this presentation, we will share key findings from our case study and also discuss the considerations that underlie Media Cloud’s forthcoming new platform integrations.

Gabrielle Lim, Shorenstein Center, Harvard University: The securitization of “fake news” in Malaysia and beyond

Abstract: As governments across the globe struggle to address disinformation and media manipulation, fears of legislative overreach and censorship have also mounted. Malaysia, a country with a long history of censorship, exemplified these concerns when the creation and dissemination of “fake news” was criminalized in 2018 under the Anti-Fake News Act (AFNA). Capitalizing on global discontent over social media and concerns over disinformation and misinformation, state-sponsored news coverage and public statements from politicians framed the issue as a matter of national security in a process known as securitization. But just as an issue or object can be securitized, it can also be counter-securitized. Throughout 2018, civil society actors in Malaysia, working in close coordination with opposition politicians, attacked the legitimacy of the AFNA in a process of counter-securitization, primarily by attacking the ruling Barisan Nasional (BN) coalition’s motivations and suggesting that the AFNA was a means to cover up corruption and silence critical voices. Cases like Malaysia illustrate how moments of legislative opportunism emerging from cross-border fears can lead to politically motivated legislation that threatens freedom of expression. However, as demonstrated by the counter-securitizing efforts and the repeal of the AFNA, there are opportunities to contest such laws, even within constrained information environments. How “fake news” is addressed is therefore a globally interconnected problem that must consider not only the technology that enables its spread, but also the sociopolitical context from which it emerges.

15:30 Coffee Break

16:00 David Rand, MIT: Reducing the spread of disinformation on social media using an accuracy nudge

Abstract: Why do people share false and misleading news, and what can be done about it? Using survey experiments, we demonstrate a disconnect between accuracy judgments and sharing intentions: even though true headlines are rated as much more accurate than false headlines, headline veracity has little impact on sharing. We argue against a “post-truth” interpretation, whereby people share false content because they care more about furthering their political agenda than they care about truth. Instead, we propose that the problem is attentional: most people do not want to spread misinformation, but the social media context focuses their attention on factors other than truth and accuracy. Indeed, when directly asked, most participants say it is important to only share accurate news. Accordingly, across three survey experiments and a field experiment on Twitter in which we messaged over 5000 users who had previously shared news from misleading websites, we find that subtly inducing people to think about the concept of accuracy increases the quality of the news they share. Together, these results challenge the popular preference-based narrative wherein people no longer care about accuracy. Instead, the results support our attention-based account wherein people fail to implement their preference for accuracy due to attentional constraints. Furthermore, our experimental design translates directly into a scalable anti-misinformation intervention that is easily implementable by social media platforms.

16:30 Yochai Benkler, Harvard University: Cautionary Notes on Disinformation & Q&A

17:30 Reception


Tuesday, MARCH 3rd

9:00 KEYNOTE: Lisa Nakamura, University of Michigan: Why Empathy Won’t Fix It: A Woman of Color Manifesto for the Internet.

Lisa Nakamura is the Gwendolyn Calvert Baker Collegiate Professor of American Culture at the University of Michigan, Ann Arbor, where she serves as the inaugural Director of the Digital Studies Initiative. She is the author of four books on race, gender, and the Internet.

Abstract: Digital platforms and the industries that create them have found themselves the object of strong critiques from without and within. The "TechWon'tBuildIt" hashtag and Google worker strikes signify a new awareness of, and resistance to, the Internet's role as an oppressive social force. Many of these projects are now and have always been fundamentally racist and sexist. Drawing on women of color feminist theory, this talk will critique the emergence of empathy and ethics as solutionist strategies for an industry that is already defined by toxic technological solutionism. It will conclude with a set of definitional criteria for determining when a platform or digital resource is racist and sexist, along with specific ideas for how to address them.

9:45 Approaches to Critical Internet Studies

Joan Donovan, Shorenstein Center, Harvard University, in conversation with Lisa Nakamura, University of Michigan, Leslie K. Jones, Rutgers University, and T.L. Taylor, MIT.

10:45 Coffee break

11:15 Health-related Mis/Disinformation

§ Jonathan Corpus Ong, University of Massachusetts, Amherst: The Polarization of Disinformation Interventions: How racist speech and coronavirus conspiracy theory evade fact-checkers and platforms in the Philippines
§ Sunyou Kang, University of Southern California: Fentanyl Panic Goes Viral: The spread of misinformation about overdose risk from casual contact with Fentanyl in mainstream and social media
§ Ifeanyi M. Nsofor, Nigeria Health Watch: Forward this to 10 People: The epidemic of health misinformation in Nigeria

Jonathan Corpus Ong, University of Massachusetts, Amherst: The Polarization of Disinformation Interventions: How racist speech and coronavirus conspiracy theory evade fact-checkers and platforms in the Philippines.

Abstract: The coronavirus triggered a secondary epidemic of racism against mainland Chinese workers and tourists in the Philippines, combining surreptitious photography with offensive jokes about misbehaving Chinese bodies and digital billboards on ride-sharing apps refusing service to "all Chinese". Informed by both historical practices of intra-ethnic racism as well as resentment of Duterte's political allegiance with the Chinese government, many progressive pundits and journalists have justified these digital expressions as a nationalistic response, or "weapons of the weak". This talk reflects on the risks of a disinformation regulation landscape that has become highly polarized, where fact-checkers, journalists, and platforms have overlooked the systematic production of anti-China extreme speech that could hurt multicultural relations but be politically advantageous for opposition forces challenging the popular support for Duterte. I consider the specific challenges of strategic amplification (Donovan & boyd 2019) to fact-checkers in populist regimes in the global South.

Sunyou Kang, University of Southern California: Fentanyl Panic Goes Viral: The spread of misinformation about overdose risk from casual contact with Fentanyl in mainstream and social media

Abstract: Between May and December 2019, we used Media Cloud to compile mainstream and social media content to characterize the diffusion and excess visibility of misinformation about overdose risk from casual fentanyl exposure. Relevant content appeared in 551 articles (2015-2019). Misinformed media content received at least 450,011 Facebook shares, with a potential reach of 69,751,705 users, and received excess social media visibility by a factor of 15 compared to corrective content, which received 29,652 shares, potentially reaching 4,596,060 users. In the age of rapid information dissemination, evidence-informed tools are needed to change misinformed narratives in mainstream and social media, which complicate overdose rescue and rationalize hyper-punitive laws.
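The factor of 15 reported above follows directly from the two Facebook share counts in the abstract; a quick sanity check:

```python
# Sanity check of the reported excess-visibility factor:
# misinformed vs. corrective Facebook share counts from the abstract.
misinformed_shares = 450_011
corrective_shares = 29_652

factor = misinformed_shares / corrective_shares
print(f"excess visibility factor: {factor:.1f}")  # roughly 15x
```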

Ifeanyi M. Nsofor, Nigeria Health Watch: Forward this to 10 People: The epidemic of health misinformation in Nigeria

Abstract: This presentation will explore the history of health misinformation in Nigeria, review cases, and examine its impacts. It will also explore how Nigeria Health Watch and other organizations counter health misinformation, and discuss next steps in tackling health misinformation in Nigeria.

12:30 Lunch break

13:30 Exploring the Dynamics of Health Disinformation

§ Jose Miguel Cansado, Alto Data Analytics: Disinformation & Public Health: The influential role of anti-vaccine narratives in the digital public sphere
§ Helge Giese, Konstanz University: The Social Diffusion of Risk Information: A psychological perspective
§ Emily K. Vraga, University of Minnesota: Correcting Health Misinformation on Social Media
§ Rosa Sicilia, Università Campus Bio-Medico di Roma: Micro-Level Rumor Detection on Twitter: Two examples in the health domain

Jose Miguel Cansado, Alto Data Analytics: Disinformation & Public Health: The influential role of anti-vaccine narratives in the digital public sphere


Abstract: As social networks and emerging alternative media become increasingly accustomed to the spread of information regarding health, the effects of information disorder within this space may pose serious risks to individuals as well as to public health. Alto's research demonstrates how big data analysis of the digital public sphere can be used to uncover potential mis- and disinformation as well as to map narratives, key actors, and audiences exposed or vulnerable to disinformation. The research presented in this session will showcase the impact of anti-vaccine narratives within the global digital public debate on vaccines.

Helge Giese, Konstanz University: The Social Diffusion of Risk Information: A psychological perspective

Abstract: People's preference for personal narratives and the lack of intuitiveness of scientific risk evaluations both contribute to a focus on misinformation in online searches for risk information. Furthermore, this biased focus may be exacerbated in social exchange. To address these issues, we present an experimental paradigm testing how characteristics of individuals and features of risk information help explain its social transmission. Finally, we discuss potential avenues by which unfavorable social dynamics may be attenuated.

Emily K. Vraga, University of Minnesota: Correcting Health Misinformation on Social Media

Abstract: Social media are often criticized for facilitating the spread of health misinformation. However, other features of social media - especially in terms of the diversity of voices and viewpoints - may facilitate correction of misinformation as it arises. Observational correction occurs when the community seeing misinformation being corrected on social media updates their own attitudes as a result, and has been documented when the corrections come from algorithms, experts, or other users. We reflect on the benefits and drawbacks of observational correction as one mechanism to address misinformation at scale on social media.

Rosa Sicilia, Università Campus Bio-Medico di Roma: Micro-Level Rumor Detection on Twitter: Two examples in the health domain.

Abstract: Recent years have witnessed a drastic change in information diffusion, which has become more and more immediate and effortless thanks to social media. Despite the clear advantages of this phenomenon, the absence of systematic control and moderation on these platforms easily leads to the spread of unreliable information. This is usually referred to as rumor: an unverified and instrumentally relevant statement in circulation. To prevent treacherous information from having social consequences, researchers have devoted considerable effort to automatic systems able to recognize rumors. Most of the work focuses on macro-level analyses, i.e., the detection system treats as a rumor the news carried by a set of microblog posts rather than by an individual post. However, a micro-level analysis that considers individual posts could be of major interest in specific domains where rumors concentrate, such as health. This area is also of great interest because nowadays people often look for health knowledge and advice through online services, but not all of these resources provide accurate or reliable information. On these grounds, the talk presents our current work, the methodologies used to tackle this issue, and the results achieved on two real-world test cases on health from Twitter.


15:00 Coffee Break

15:30 Joshua Tucker, New York University: Building the Field -- Q&A

Abstract: To some extent, it has been the best of times and the worst of times when it comes to digital media research, and in particular for understanding the impact of social media on politics in both democratic and non-democratic regimes. As the previous presentations in this conference have demonstrated, we are beginning to gain important insights into the dynamics of the communication revolution underway. However, despite these achievements and the widely recognized importance of this research, unique constraints have hindered the necessary concerted academic effort to answer the most important empirical questions. The key social media datasets needed to answer these questions are not as readily available as were the politically relevant datasets of years past. Moreover, unique legal barriers prevent analysis of such data, and related ethical and privacy concerns have arisen that have chilled academic inquiry. In this closing session, Professor Tucker will kick off an open discussion by making a case for the paramount importance of ensuring outside access to social media data for independent researchers, addressing some of the existing obstacles to data access, and discussing the advantages and disadvantages of three different paths towards ensuring necessary data access in the future: working cooperatively with the platforms; designing data collection strategies, including “citizen science” efforts, that can function independently of platform cooperation; and advocating for government regulation to ensure data access for researchers committed to putting their findings in the public domain.

16:30 END of CONFERENCE


Speakers:

Ethan Zuckerman is director of the Center for Civic Media at MIT, and an Associate Professor of the Practice at the MIT Media Lab. His research focuses on the use of media as a tool for social change, the role of technology in international development, and the use of new media technologies by activists.

Jean-Philippe Cointet works at the Sciences Po médialab, where he designs innovative computational sociology methods. He specializes in text analysis and works on various kinds of corpora and sources, questioning their socio-political dynamics. His research areas are diverse, ranging from social media analysis (Facebook public posts and comments) to the science of science (the data turn in oncology), the mapping of political processes (political discourses, international negotiations), and frame analysis (press coverage of migration processes). He also participates in developing the CorText platform. He holds a PhD in Complex Systems and was trained as an engineer at Ecole Polytechnique. He is also affiliated with the INCITE research center at Columbia University.

Fabio Giglietto, PhD, is Associate Professor at the Department of Communication Sciences, Humanities and International Studies at the University of Urbino Carlo Bo, where he teaches Internet Studies. His main research interests are the theory of information, communication, and society, with a specific focus on the relationship between social systems, media, and digital technologies. On these topics, he has published extensively in journals such as the Journal of Communication, Information, Communication & Society, the Journal of Broadcasting and Electronic Media, Social Media + Society, and Social Science Computer Review.

Jeremy Blackburn is an Assistant Professor in the Computer Science Department at Binghamton University. He is broadly interested in data science, with a focus on large-scale measurements and modeling. His largest line of work is in understanding jerks on the Internet.
His research into understanding toxic behavior, hate speech, and fringe and extremist Web communities has been covered in the press by The Atlantic, The Wall Street Journal, the BBC, and New Scientist, among others. Prior to his appointment at Binghamton, Jeremy was an assistant professor in the Department of Computer Science at the University of Alabama at Birmingham. Prior to that, Jeremy was an associate researcher at Telefonica Research in Barcelona, Spain.

Gianluca Stringhini is an Assistant Professor in the Electrical and Computer Engineering Department at Boston University. Before joining BU, he was faculty at University College London. Gianluca works in the area of data-driven security, analyzing large datasets to better understand complex malicious online operations and developing mitigation techniques to fight them. He was awarded a Facebook Secure the Internet Grant in 2018, a Google Faculty Research Award in 2015, the Symantec Research Labs Fellowship in 2012, and multiple Best Paper Awards, including one at IMC 2018. He has published in top security conferences such as CCS, NDSS, and USENIX Security, as well as top measurement and Web conferences such as IMC, ICWSM, and WWW.

Samantha Bradshaw is a D.Phil. Candidate at the Oxford Internet Institute, University of Oxford, and is a Researcher on the Computational Propaganda Project. Her dissertation research examines the producers and drivers of disinformation, and how technology—artificial intelligence, automation, and big data analytics—enhances and constrains the spread of disinformation online. At the forefront of theoretical and methodological approaches for studying, analyzing, and explicating the complex relationship between social media and democracy, Samantha’s research has helped advance academic debate, public understanding, and policy discussions around the impact of technology on political expression and privacy. She has published research in several academic journals, and has had work featured in global media outlets such as the New York Times, the Washington Post, the BBC, and CNN. Samantha tweets from @sbradshaww.

Lutz Güllner works in the European External Action Service, the EU's diplomatic arm, where he is Head of Division for Strategic Communications and Information Analysis. He leads a team of about 35 people dealing with issues related to disinformation and foreign manipulative interference. In his work, he focuses on addressing disinformation threats to the EU and to the EU's neighbourhood region.

Kiran Garimella is the Michael Hammer postdoctoral researcher at the Institute for Data, Systems, and Society at MIT. Before joining MIT, he was a postdoc at EPFL, Switzerland. His research focuses on using digital data for social good, including areas like polarization, misinformation, and human migration. His work on studying and mitigating polarization on social media won the best student paper awards at WSDM 2017 and WebScience 2017. Kiran received his PhD at Aalto University, Finland, and his Masters & Bachelors from IIIT Hyderabad, India. Prior to his PhD, he worked as a Research Engineer at Yahoo Research, Barcelona, and the Qatar Computing Research Institute, Doha.
More info: https://users.ics.aalto.fi/kiran/

Alexei Abrahams is a postdoctoral researcher at the Citizen Lab, Munk School of Global Affairs and Public Policy, University of Toronto, where he leads a research agenda that uses big data science and game theory to make sense of disinformation and propaganda centered on the MENA region.

Kameswari Chebrolu is a faculty member of the Department of Computer Science and Engineering at the Indian Institute of Technology Bombay, India. She received her MS and PhD degrees in Electrical and Computer Engineering from the University of California, San Diego. The focus of Kameswari's research is on developing cutting-edge technology for real-world use. In the past, she has worked on projects aimed at providing Internet access to rural villages, enabling voice communication in remote tribal areas, and monitoring the health of railway bridges. Currently she is focused on developing smart educational technology for classroom use, AI-human-hybrid technology that caters to the information needs of the bottom of the pyramid, and countering fake news on social media, with a focus on WhatsApp.

Emily Ndulue is a researcher with the Media Ecosystems Analysis Group and jointly holds a research appointment in the Center for Civic Media at the MIT Media Lab. Her work revolves around using content analysis methods on news and social media data to inform the work of social good organizations. Recently, she has completed research with the Bill and Melinda Gates Foundation, the Ford Foundation, the United Nations Foundation, the Robert Wood Johnson Foundation, and the German Marshall Fund. A key component of her work is evaluating newsroom compliance with reporting standards that advance social good, such as standards for reporting on mass shootings, suicide coverage, and the language used in immigration reporting. She brings a background in evaluation and strategic planning, as well as public health research and practice. 
Emily has a master’s degree in health communication from Tufts University School of Medicine.

Aashka Dave is a researcher and community manager for the Media Cloud project at the Center for Civic Media at the MIT Media Lab. With a background in digital journalism and media research, Aashka’s research focuses on media ecosystems, newsroom strategy considerations, and disaster communications. Aashka holds an M.S. in Comparative Media Studies from MIT and has previously worked at the Harvard Kennedy School and The .

Gabrielle Lim is a researcher at the Technology and Social Change Research Project (TaSC) at Harvard’s Shorenstein Center, as well as a fellow with the Citizen Lab. Her research centers on information controls and security, particularly disinformation and media manipulation.

David Rand is the Erwin H. Schell Professor and Associate Professor of Management Science and Brain and Cognitive Sciences at MIT. Bridging the fields of behavioral economics, cognitive science, and social psychology, David’s research combines behavioral experiments and online/field studies with mathematical/computational models to understand human decision-making. His work focuses on illuminating why people believe and share misinformation and “fake news”; understanding political psychology and polarization; and promoting human cooperation. His work has been published in peer-reviewed journals such as Nature, Science, PNAS, the American Economic Review, Psychological Science, Management Science, and the
American Journal of Political Science, and has received widespread media attention. He has also written for popular press outlets including the New York Times, Wired, and New Scientist. He was named to Wired magazine’s Smart List 2012 of “50 people who will change the world,” chosen as a 2012 Pop!Tech Science Fellow, received the 2015 Arthur Greer Memorial Prize for Outstanding Scholarly Research, was selected as fact-checking researcher of the year in 2017 by the Poynter Institute’s International Fact-Checking Network, and received the 2020 FABBS Early Career Impact Award from the Society for Judgment and Decision Making. Papers he has coauthored have been awarded Best Paper of the Year in Experimental Economics, Social Cognition, and Political Methodology.

Yochai Benkler is the Berkman Professor of Entrepreneurial Legal Studies at Harvard Law School and faculty co-director of the Berkman Klein Center for Internet and Society at Harvard University. Since the 1990s he has helped characterize the contribution of information commons and decentralized collaboration to innovation, information production, and freedom in the networked economy and society. His books include The Wealth of Networks: How Social Production Transforms Markets and Freedom (Yale University Press, 2006), which won academic awards from the American Political Science Association and the American Sociological Association, as well as the McGannon Award for social and ethical relevance in communications. In 2012 he received a lifetime achievement award from Oxford University in recognition of his contribution to the study and public understanding of the Internet and information goods. His work is socially engaged, winning him the Ford Foundation Visionaries Award in 2011, the Electronic Frontier Foundation’s Pioneer Award for 2007, and the Public Knowledge IP3 Award in 2006. 
It is also anchored in the realities of markets, cited as “perhaps the best work yet about the fast moving, enthusiast-driven Internet” by the Financial Times and named best business book about the future in 2006 by Strategy and Business. Benkler has advised governments and international organizations on innovation policy and telecommunications, and serves on the boards or advisory boards of several nonprofits working towards an open society. His work can be freely accessed at http://www.benkler.org.

Dr. Joan Donovan is Director of the Technology and Social Change (TaSC) Research Project at the Shorenstein Center. Dr. Donovan leads the field in examining internet and technology studies, online extremism, media manipulation, and disinformation campaigns.

Lisa Nakamura is the Gwendolyn Calvert Baker Collegiate Professor of American Culture at the University of Michigan, Ann Arbor, where she serves as the inaugural Director of the Digital Studies Initiative. She is the author of four books on race, gender, and the Internet.

Leslie Jones is an Assistant Professor in the Department of Sociology at Rutgers New Brunswick, specializing in social movements. She draws extensively on the fields of race and gender, critical race theory, and online social media in her study of collective mobilization. In her dissertation, she argued that Black women are forming intellectual salons through online social media, in which they are making groundbreaking theoretical contributions toward the public understanding of race and gender. Leslie is an interdisciplinary scholar who is active in the digital humanities and digital sociologies communities.

T.L. Taylor is Professor of Comparative Media Studies and co-founder and Director of Research for AnyKey, an organization dedicated to supporting and developing fair and inclusive esports. She is a qualitative sociologist who has focused on internet and game studies for over two decades. Dr. Taylor’s research explores the interrelations between culture and technology in online leisure environments.

Jonathan Corpus Ong (PhD, Cambridge) is Associate Professor of Global Digital Media at the University of Massachusetts Amherst. He is Co-Editor-in-Chief of the 20-year-old media studies journal Television & New Media. Jonathan has published two books and over 20 journal articles in his research areas of humanitarian technologies, digital politics, and media ethics. His research on the political trolling industries in Southeast Asia applies ethnography to advocate for "process-oriented" approaches to fighting disinformation and promoting transparency in political marketing. His 2018 report “Architects of Networked Disinformation: Behind the Scenes of Troll Accounts and Fake News Production in the Philippines” informed the development of new social media campaign monitoring rules for the 2019 Philippine elections. He has written for or been featured in outlets such as The New York Times and BuzzFeed.

Sunyou Kang is an undergraduate student at the University of Southern California pursuing her Bachelor’s in Health Promotion & Disease Prevention. She is also the Research Administrator for the Health in Justice Action Lab at Northeastern University and a Staff Research Associate at the UC San Diego School of Medicine.


Dr. Ifeanyi McWilliams Nsofor is a graduate of the Liverpool School of Tropical Medicine, a Senior New Voices Fellow at the Aspen Institute, and a Senior Atlantic Fellow for Health Equity at George Washington University. He is the CEO of EpiAFRIC, Director of Policy and Advocacy at Nigeria Health Watch, and a thought leader in global health. He has written 37 opinion pieces for publishers including BioMed Central’s Bugbitten Blog, Inter Press Service, Project Syndicate, The Hill, Devex, African Arguments, AllAfrica, The Globe Post, Nigeria Health Watch, and Vanguard Nigeria. Ifeanyi’s health communications and health advocacy efforts are hinged on the University of Global Health Equity’s mantra that to achieve equity in healthcare, there must be equity in health education. Ifeanyi is married to Omegie, and they have two daughters (Yagazie and Chimamanda) and a dog, Simba.

Jose Miguel Cansado is the General Director of Alto Analytics. Since its launch in 2013, Jose Miguel has been part of Alto, helping large organizations implement a data-driven culture to inform strategy and decisions with advanced intelligence. Prior to that, he developed his international career at IBM and Alcatel-Lucent (now Nokia), including eight years in Asia (Shanghai and Kuala Lumpur) in management positions in Multimedia and Mobile Communications. Author of “Digital Renaissance” (2012), Jose Miguel is a telecommunications engineer and holds a master’s in Telecommunications Marketing from INSEAD, as well as an Executive MBA from IE Business School.

Dr. Helge Giese is a research scientist at the chair of Social Psychology and Decision Sciences at the University of Konstanz. His research interests include social influence and opinion dynamics within social networks, as well as risk perception and health communication.

Emily K. Vraga is an associate professor at the Hubbard School of Journalism and Mass Communication at the University of Minnesota, where she holds the Don and Carole Larson Professorship in Health Communication. Her research tests techniques to correct health misinformation on social media, to limit biased processing of news messages, and to encourage attention to more diverse content online. She has published over 60 articles in leading communication, journalism, political, and health journals.

Rosa Sicilia was born in Cosenza, Italy, in 1993. She graduated with honours in Biomedical Engineering from Università Campus Bio-Medico, Rome, in 2016, and in the same year began a PhD in Biomedical Engineering (Computer Science area) in the Computer Systems & Bioinformatics Laboratory, Department of Engineering, at the same university. She is currently a PhD candidate with a final dissertation on “mLevel Rumour Detection on Twitter”, which presents a machine learning system for automatic rumour detection in the health domain. Besides her main research interests in machine learning, data mining, and social network analysis, she is also working on radiopathomics, i.e. the prediction of cancer patients’ prognosis through radiopathological image analysis, and on multivariate time series analysis.

Joshua A. Tucker is Professor of Politics, affiliated Professor of Russian and Slavic Studies, and affiliated Professor of Data Science at New York University. He is the Director of NYU’s Jordan Center for Advanced Study of Russia, a co-Director of the NYU Social Media and Political Participation (SMaPP) laboratory, and a co-author/editor of the award-winning politics and policy blog The Monkey Cage at The Washington Post. He serves on the advisory boards of the American National Election Study, the Comparative Study of Electoral Systems, and numerous academic journals, and was the co-founder and co-editor of the Journal of Experimental Political Science. 
His original research was on mass political behavior in post-communist countries, including voting and elections, partisanship, public opinion formation, and participation. More recently, he has been at the forefront of the emerging field studying the relationship between social media and politics. His research in this area has included studies on the effects of network diversity on tolerance, partisan echo chambers, online hate speech, the effects of exposure to social media on political knowledge, online networks and protest, disinformation and fake news, how authoritarian regimes respond to online opposition, and Russian bots and trolls. An internationally recognized scholar, he has served as a keynote speaker for conferences in Sweden, Denmark, Italy, Brazil, the Netherlands, Russia, and the United States, and has given over 100 invited research presentations at top domestic and international universities and research centers over the past 10 years. His research has appeared in over two dozen scholarly journals and has been supported by approximately $16 million in funding from a wide range of philanthropic foundations, as well as the National Science Foundation. His most recent book is the co-authored Communism’s Shadow: Historical Legacies and Contemporary Political Attitudes (Princeton University Press, 2017), and he is the co-editor of the forthcoming Social Media and Democracy: The State of the Field (Cambridge University Press, 2020). he/him/his, @j_a_tucker
