Health Misinformation and the Power of Narrative Messaging in the Public Sphere

Timothy Caulfield, Alessandro R. Marcon, Blake Murdoch, Jasmine M. Brown, Sarah Tinker Perrault, Jonathan Jerry, Jeremy Snyder, Samantha J. Anthony, Stephanie Brooks, Zubin Master, Ubaka Ogbogu, Joshua Greenberg, Amy Zarzeczny, Robyn Hyde-Lay

TC – Health Law Institute, Faculty of Law, School of Public Health, University of Alberta
ARM – Health Law Institute, Faculty of Law, University of Alberta
BM – Health Law Institute, Faculty of Law, University of Alberta
JMB – Director of Communications, Institute of Health Economics; Graduate Student, Department of Psychiatry, University of Alberta
STP – University Writing Program, University of California, Davis
JJ – Office for Science and Society, McGill University
JS – Faculty of Health Sciences, Simon Fraser University
SJA – The Hospital for Sick Children; Co-Investigator, CNTRP
SB – Department of Pediatrics, Faculty of Medicine, University of Alberta
ZM – Biomedical Ethics Research Program, Mayo Clinic
UO – Faculty of Law, Faculty of Pharmacy and Pharmaceutical Sciences, University of Alberta
AZ – Johnson Shoyama Graduate School of Public Policy, University of Regina
JG – School of Journalism and Communication, Faculty of Communication and Media Studies, Carleton University
RHL – Health Law Institute, Faculty of Law, University of Alberta

I. Introduction

There is growing recognition that numerous social, economic and academic pressures can have a negative impact on representations of biomedical research. Empirical evidence indicates that spin, or interpretive bias, is injected throughout the knowledge production process, including at the stage of grant writing, in the execution of the research, and in the production of the relevant manuscripts and institutional press releases [Boutron and Ravaud, 2018; Caulfield and Condit, 2012; Yavchitz et al., 2012]. The popular press, marketing forces, and online media also play significant roles in misrepresenting biomedical research [Caulfield and Condit, 2012; Kamenova, Reshef, and Caulfield, 2014; Turner and Knoepfler, 2016; Van Atteveldt, Van Aalderen-Smeets, Jacobi, and Ruigrok, 2014]. Here, the wilful or unintended dissemination of misinformation through increasingly interactive media platforms is a very real and growing concern [Anderson and Rainie, 2017; Edelman, 2018] and presents a significant challenge to generating rational, evidence-based conversations about biomedicine in general, and about its benefits and risks in particular. Exacerbating these forces, numerous trends have emerged that further complicate the science communication landscape, especially in the context of health.

In this commentary, we review several of the forces playing an increasingly pernicious role in how information is interpreted, shared and used. In particular, we focus on the role that narrative plays in communicating science and misinformation, exploring how aspects of narrative are used in different social contexts and communication environments. While most people have an inherent understanding of storytelling, narrative has been defined as a type of communication that “describes the cause-and-effect relationships between events that take place over a particular time period that impact particular characters” [Dahlstrom, 2014]. Narrative is a powerful tool that can enhance engagement with, and understanding of, both truths and falsehoods [Glaser, Garsoffky, and Schwan, 2009]. Online, and specifically in the context of misinformation (e.g. conspiracy narratives, pseudoscience, etc.), research has shown that the construction and dissemination of narratives requires the efforts of numerous participants with differing levels of engagement [Introne, Yildirim, Iandoli, DeCook, and Elzeini, 2018]. Such narrative construction is dynamic, incorporating numerous discursive elements (multiple sources, hashtags, hyperlinks, memes, etc.), as well as flexible, adopting and discarding both story elements and participants as time passes [Introne, Yildirim, Iandoli, DeCook, and Elzeini, 2018].

This commentary is not meant to be a comprehensive survey of emerging health communication developments or an exhaustive account of the role that narrative plays in that context. Rather, we highlight elements that have increasingly troubled health communication in recent times, and present creative responses that may help counter the negative trends. Indeed, as we will see, traditional methods of communication have in many ways failed the public, and changes in approach are required.

II. Online Communities

Much has been written about the growing influence of social media on public discourse, including in the context of health [Smailhodzic, Hooijsma, Boonstra, and Langley, 2016]. While traditional news outlets, such as newspapers, remain the dominant source of science information [Funk, Gottfried, and Mitchell, 2017], social media platforms, including Facebook, Twitter, Instagram, YouTube and Reddit, have become important sources of health information and sites for public engagement and community-building [Centola, 2010; De Choudhury and De, 2014; De Choudhury, Morris, and White, 2014; Fernández-Luque and Bau, 2015; Guilbeault, Becker, and Centola, 2018]. Studies show younger generations are more willing than older generations to share health information about themselves online, and people of all generations increasingly go online to seek others with similar health concerns or conditions for information and support [Graham, Cobb, and Cobb, 2016]. All age groups in North America use social media heavily [Gruzd, Jacobson, Mai, and Dubois, 2018; Smith and Anderson, 2018], so the influence of social networking platforms is likely to grow.

We know, however, that the health and science information on these platforms is often problematic [Antheunis, Tates, and Nieboer, 2013; Madathil, Rivera-Rodriguez, Greenstein, and Gramopadhye, 2015]. Research has found, for example, that social media are used to spread harmful health messages, including antivaccine rhetoric [Dunn, Leask, Zhou, Mandl, and Coiera, 2015; Tomeny, Vargo, and El-Toukhy, 2017], misinformation about the Zika virus [Sharma, Yadav, Yadav, and Ferdinand, 2017] and Lyme disease [Basch et al., 2017], and misleading Ebola-related prevention and treatment strategies [Oyeyemi, Gabarron, and Wynn, 2014]. Alarmingly, a 2018 study of millions of Twitter interactions over a ten-year period found that falsehoods diffused “farther, faster, deeper and more broadly” than the truth [Vosoughi, Roy, and Aral, 2018]. And while some recent research has indicated that notions of the “echo chamber” might be overstated [Dubois and Blank, 2018; Flaxman, Goel, and Rao, 2013], online environments do exhibit polarization characteristics and are spaces where misinformation can spread virally [Del Vicario et al., 2016]. Information dissemination among the public is incredibly complex.
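To make the structural claim above concrete, the following is a minimal, illustrative Python sketch, not the method or code of the Vosoughi, Roy, and Aral study: assuming a retweet cascade has been recorded as a hypothetical parent-to-children mapping, it computes the three structural measures behind “farther”, “deeper” and “more broadly”, namely cascade size, depth and maximum breadth.

def cascade_metrics(children, root):
    # Level-order walk of a cascade tree given as {post: [direct retweets]}.
    # Returns size (total posts), depth (longest retweet chain) and maximum
    # breadth (the widest single "generation" of retweets).
    size, depth, max_breadth = 1, 0, 1
    level = [root]
    while level:
        next_level = [c for node in level for c in children.get(node, [])]
        if not next_level:
            break
        depth += 1
        size += len(next_level)
        max_breadth = max(max_breadth, len(next_level))
        level = next_level
    return {"size": size, "depth": depth, "max_breadth": max_breadth}

# Hypothetical cascade: "a" posts a claim; "b" and "c" retweet it, and "c"
# is in turn retweeted by "d", "e" and "f".
toy_cascade = {"a": ["b", "c"], "c": ["d", "e", "f"]}
print(cascade_metrics(toy_cascade, "a"))
# -> {'size': 6, 'depth': 2, 'max_breadth': 3}

In this toy cascade, a single widely retweeted post (“c”) drives both depth and breadth, which is why such measures capture diffusion patterns that raw share counts alone would miss.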
While offline interactions in community settings such as clubs, organizations and social groups still carry significant weight, social media also clearly impacts bioscience communication, affecting not only the general public but also research funding and policy [Chafe, Born, Slutsky, and Laupacis, 2011; Pullman, Zarzeczny, and Picard, 2013]. Social media consists of diverse communication ecosystems, shaped by the algorithmic logic of each particular platform, its use by varied demographics, and the unique communication trends and patterns that result. A core dynamic shared by these ecosystems is “social homophily”: people more commonly associate with, and are more influenced by, those similar to themselves [Centola, 2010; Guilbeault et al., 2018; McPherson, Smith-Lovin, and Cook, 2001; Sunstein, 2017]. The clustering of individuals online into various communities heightens the role that public intellectuals, celebrities and influencers can play in knowledge transmission [Caulfield, 2015; Freberg, Graham, McGaughey, and Freberg, 2011; Khamis, Ang, and Welling, 2017], while platform algorithms are also becoming increasingly influential in shaping how information circulates [Muchnik, Aral, and Taylor, 2013; Striphas, 2016].

In more contentious social contexts, groups or communities invested in shaping public perceptions of particular topics can form, and sharp divisions can at times emerge between groups with differing perspectives. For example, research has shown that invested parties can create “discourse coalitions” [Hajer, 2002; Metze and Dodge, 2016], making use of communal terms and arguments to promote their causes [Attwell, Smith, and Ward, 2018; Marcon and Caulfield, 2017; Smart, 2011; Yardi and Boyd, 2010]. Here, heuristics such as confirmation bias [Stanovich and West, 2000] and information avoidance [Golman, Hagmann, and Loewenstein, 2017] can reinforce established beliefs. Research has shown that antagonistic discourse communities become polarized [Schmidt, Zollo, Scala, Betsch, and Quattrociocchi, 2018; Sunstein, 2017; Yardi and Boyd, 2010], raising questions about how to get differing groups to communicate effectively and, importantly, how to disseminate accurate and truthful information to all groups [Lazer et al., 2018; Sunstein, 2017; Yardi and Boyd, 2010]. This matter is becoming increasingly complicated with the rise of misinformation, which, in the modern media landscape, has taken on the popular label of “fake news” [Lazer et al., 2018; Silverman, 2016]. Research has