Digital Disinformation and Election Integrity: Benchmarks for Regulation
ISSN (Online) - 2349-8846

Sahana Udupa ([email protected]) is Professor of Media Anthropology at the Ludwig-Maximilians-Universität (LMU) Munich, Germany.

Vol. 54, Issue No. 51, 28 Dec, 2019

As critical events in democratic life, elections pose extraordinary challenges to the autonomy of public opinion. This article outlines some of the regulatory challenges that have emerged in the wake of digital media expansion in India, and argues that the self-regulatory mechanism developed during the 2019 national elections is insufficient to address problems of online extreme speech, algorithmic bias, and proxy campaigns. Building on the electoral management model proposed by Netina Tan, it suggests that a critical overview of ongoing efforts can help determine the readiness of Indian regulatory structures to respond to digital disruptions during elections, and emphasises the need for a co-regulatory mechanism.

It is commonplace to acknowledge that political parties and politicians intensify their efforts to influence public mood and voter loyalties during elections. Democracies, then, not only become a theatre for maverick speech and public performances, but also a testing ground for regulatory interventions. The expansion of digital media in India in the last decade has placed new pressures on regulatory efforts to contain malicious rumours and disinformation during elections. These tensions reflect similar developments around digital social media and electoral processes around the world. Globally, digital campaigns have raised concerns around data privacy and microtargeting, as well as the use of bots and algorithmic sorting as new ways to sabotage political discourse (Ong and Cabanes 2018; Bradshaw and Howard 2017).
The 2019 general elections in India exposed several limits and loopholes in the existing regulatory structures around media-enabled campaigns. During the elections, digital social media and messaging services emerged as a battleground for political parties to experiment with new tactics of content creation and distribution. Building on years of preparation, the Bharatiya Janata Party (BJP) was at the forefront of organising novel ways of creating and distributing election content. The party continued to rely on its office bearers, proxy workers, and volunteers to navigate different levels of content veracity and creative messaging. Multiple strategies of content creation were at work: from straightforward “party line” slogans to deep message ambiguation, where words mutate as they travel and accumulate sinister meanings within specific cultural and political contexts of reception.

Innovations were also striking on the distribution side. If office bearers with designated roles as social media coordinators closely monitored the “official channel” of content flow from national to local levels, proxy workers and volunteers assembled vast networks of distribution based on personal connections and snowballing techniques. These networks were further augmented by the potential virality of fear-inducing and humour-laden extreme speech that targeted communities based on religion, caste and gender (Udupa 2019).

The BJP’s first-mover advantage in social media campaigning was challenged by other political parties in the run-up to the elections. Stepping up its efforts, the Indian National Congress (INC) re-energised several of its party units, including a dedicated “research team” that prepared “counters” to the BJP and other parties. Full-fledged social media teams of the Congress and regional political parties joined the same game of composing witty, satirical, and retaliatory messages.
Alongside party-based efforts, individual politicians increasingly recruited social media campaigners for online promotions. It was common to witness social media strategists accompanying politicians on campaign visits for ward-level mobilisation. These strategists ranged from a single individual who would follow the leader with a camera and upload the video within minutes to Twitter, YouTube, and Facebook, to small- and mid-sized enterprises with paid teams working on social media promotions. Media reports also exposed clandestine operations of proxy companies that created toxic digital campaign content aimed at religious minorities and opposition party leaders (Poonam and Bansal 2019). Even as Facebook, WhatsApp and Twitter came under scrutiny for election content volatilities, TikTok, ShareChat, Helo and other mid-range platforms started providing new means to share political content and peddle partisan positions.

The vast complexity of content creation and distribution channels, together with the speed of circulation in the digital age, placed enormous demands on regulatory mechanisms during the national elections. How did the regulatory system respond, and what were the limitations?

Voluntary Code of Ethics

The Election Commission of India (ECI) opted for a cautious, if lenient, approach that allowed social media companies to develop a “voluntary code of ethics.” The voluntary code aimed to bring transparency to paid political advertisements and place checks on violative content. With the Internet and Mobile Association of India (IAMAI) as the representative body, social media platforms including Facebook, WhatsApp, Twitter, Google, ShareChat and TikTok agreed to act on violations reported under Section 126 of the Representation of the People Act, 1951, within three hours of receiving complaints from the ECI. The time frame followed the recommendations of the Sinha Committee.
During the national elections, social media platforms acted on 909 violative cases reported by the Election Commission (BBC Monitoring South Asia 2019). Social media companies also agreed to “provide a mechanism for political advertisers to submit pre-certified advertisements issued by Media Certification and Monitoring Committee” (ECI 2019a). Alongside these steps, IAMAI members promised to organise voter awareness campaigns.

The ECI–IAMAI agreement was the first formal regulatory step to bring internet-based companies to agree on implementing a voluntary code. The code covered key aspects of internet speech regulation, including expeditious redressal of potentially violative content, transparency in political advertisements, capacity building for nodal officers in reporting harmful content, public awareness, and coordination between social media platforms and the ECI. According to the ECI, Facebook, Twitter, WhatsApp and other social media companies have agreed to adhere to this code in all future elections, including the Maharashtra and Haryana assembly polls (ECI 2019b). The self-regulatory code is likely to remain a common feature of election-related regulatory processes in the coming years.

Without doubt, self-regulatory mechanisms have several merits. They can prevent regulatory overreach and political misuse of existing provisions. Germany, for instance, has introduced new regulations that compel social media companies, under threat of penalties, to remove content flagged as hate speech. These drastic measures have invited criticism that penalties are decided without “prior determination of the legality of the content at issue by court” (Article 19 2017: 2). Concerns have been raised that such unilateral actions could set a bad precedent for countries where guarantees of political freedom are not secure.
While the self-regulatory code appears to be a good solution in the context of actual and potential misuse of regulatory power, the question remains whether the voluntary code is sufficient to realise the stated regulatory objectives of containing harmful content and stemming opaque sources of political advertising. A telling detail in the Indian case is that the IAMAI continues to act as a liaison between the ECI and social media companies. Social media companies have secured the buffer of an association to agree to a voluntary code. The looming question is whether such double distancing—first from being direct parties and second from enforceable obligation—can bring about the desired changes.

The fate of the Codes of Ethics and Broadcasting Standards in commercial news television is a sobering reminder of the limitations of self-regulation (Seshu 2018). Mechanisms of peer surveillance and industry-evolved guidelines have, in this case, failed to ensure uniform compliance. In 2009, the News Broadcasters Association (NBA), a professional association for private news broadcasters, drew up a code of ethics and set up the News Broadcasting Standards Disputes Redressal Authority. This industry-wide response was prompted by governmental attempts to make direct regulatory interventions in content. Since its inception, the NBA has advocated for stronger and more uniform application of the code of ethics across television channels. However, the Hoot’s study in 2012 revealed that the NBA “did not take ‘strong punitive action’ against the channels that violated their guidelines” (Akoijam 2012). A more recent report in the Hoot has confirmed that the trend did not improve in the following years (Seshu 2018). Global trends have also suggested that the self-regulatory model bears the risk of fragmentation and lack of legitimacy. How, then, would this work for the even more volatile field of digital social media and messenger services?
An effective co-regulatory model is much needed in ensuring