ISSN (Online) - 2349-8846

Digital Disinformation and Election Integrity: Benchmarks for Regulation

SAHANA UDUPA

Sahana Udupa ([email protected]) is Professor of Media Anthropology at the Ludwig-Maximilians-Universität (LMU) Munich, Germany.

Vol. 54, Issue No. 51, 28 Dec, 2019

As critical events in democratic life, elections pose extraordinary challenges to the autonomy of public opinion. This article outlines some of the regulatory challenges that have emerged in the wake of digital media expansion in India, and argues that the self-regulatory mechanism developed during the 2019 national elections is insufficient to address problems of online extreme speech, algorithmic bias, and proxy campaigns. Building on the electoral management model proposed by Netina Tan, it suggests that a critical overview of ongoing efforts can help determine the readiness of Indian regulatory structures to respond to digital disruptions during elections, and it emphasises the need for a co-regulatory mechanism.

It is commonplace to acknowledge that political parties and politicians intensify their efforts to influence public mood and voter loyalties during elections. Democracies, then, not only become a theatre for maverick speech and public performances, but also a testing ground for regulatory interventions. The expansion of digital media in India in the last decade has placed new pressures on regulatory efforts to contain malicious rumours and disinformation during elections. These tensions reflect similar developments in digital social media and electoral processes around the world. Globally, digital campaigns have raised concerns around data privacy and microtargeting, as well as the use of bots and algorithmic sorting as new ways to sabotage political discourse (Ong and Cabanes 2018; Bradshaw and Howard 2017).

The 2019 general elections in India exposed several limits and loopholes in the existing regulatory structures around media-enabled campaigns. During the elections, digital social media and messaging services emerged as a battleground for political parties to experiment with new tactics of content creation and distribution. Building on years of preparation, the Bharatiya Janata Party (BJP) was at the forefront in organising novel ways of creating and distributing election content. The party continued to rely on its office bearers, proxy workers, and volunteers to navigate different levels of content veracity and creative messaging. Multiple strategies of content creation were at work: from straightforward “party line” slogans to deep message ambiguation where words mutate as they travel and accumulate sinister meanings within specific cultural and political contexts of reception. Innovations were also striking on the distribution side. If office bearers with designated roles as social media coordinators closely monitored the “official channel” of content flow from national to local levels, proxy workers and volunteers assembled vast networks of distribution based on personal connections and snowballing techniques. These networks were further augmented by the potential virality of fear-inducing and humour-laden extreme speech that targeted communities based on religion, caste and gender (Udupa 2019).

The BJP’s first-mover advantage in social media campaigning was challenged by other political parties during the run-up to the elections. Stepping up its efforts, the Indian National Congress (INC) re-energised several of its party units, including a dedicated “research team” to prepare “counters” to the BJP and other parties. Full-fledged social media teams of the Congress and regional political parties got on to the same game of composing witty, satirical, and retaliatory messages.

Alongside party-based efforts, individual politicians increasingly recruited social media campaigners for online promotions. It was common to witness social media strategists accompanying politicians during campaign visits for ward-level mobilisation. These strategists ranged from a single individual who would follow the leader with a camera to upload the video the very next minute on Twitter, YouTube, and Facebook, to small- and mid-sized enterprises that had paid teams working on social media promotions. Media reports also exposed clandestine operations of proxy companies that created toxic digital campaign content aimed against religious minorities and opposition party leaders (Poonam and Bansal 2019). Even as Facebook, WhatsApp and Twitter came under the radar for election content volatilities, TikTok, ShareChat, Helo and other mid-range platforms started providing new means to share political content and peddle partisan positions.

The vast complexity of content creation and distribution channels, together with the speed of circulation in the digital age, placed enormous demands on regulatory mechanisms during the national elections. How did the regulatory system respond, and what were the limitations?

Voluntary Code of Ethics

The Election Commission of India (ECI) opted for a cautious, if lenient, approach that allowed social media companies to develop a “voluntary code of ethics.” The voluntary code aimed to bring transparency to paid political advertisements and place checks on violative content. With the Internet and Mobile Association of India (IAMAI) as the representative body, social media platforms including Facebook, WhatsApp, Twitter, Google, ShareChat and TikTok agreed to act on violations reported under Section 126 of the Representation of the People Act, 1951, within three hours of receiving complaints from the ECI. The time frame followed the recommendations of the Sinha Committee. During the national elections, social media platforms acted on 909 violative cases reported by the Election Commission (BBC Monitoring South Asia 2019). Social media companies also agreed to “provide a mechanism for political advertisers to submit pre-certified advertisements issued by Media Certification and Monitoring Committee” (ECI 2019a). Alongside these steps, IAMAI members promised to organise voter awareness campaigns.

The ECI–IAMAI agreement was the first formal regulatory step towards getting internet-based companies to agree to implement a voluntary code. The code covered key aspects of internet speech regulation, including expeditious redressal of potentially violative content, transparency in political advertisements, capacity building for nodal officers in reporting harmful content, public awareness, and coordination between social media platforms and the ECI. According to the ECI, Facebook, Twitter, WhatsApp and other social media companies have agreed to adhere to this code in all future elections, including the Maharashtra and Haryana assembly polls (ECI 2019b).

The self-regulatory code is likely to remain a common feature of election-related regulatory processes in the coming years. Without doubt, self-regulatory mechanisms have several merits. They can prevent regulatory overreach and political misuse of existing provisions. For instance, Germany has introduced new regulations that compel social media companies, under threat of punitive fines, to remove content flagged as hate speech. These drastic measures have invited criticism that penalties are decided without “prior determination of the legality of the content at issue by court” (Article 19 2017: 2). Concerns have been raised that such unilateral actions could set a bad precedent for countries where guarantees to political freedom are not secure.

While the self-regulatory code appears to be a good solution in the context of actual and potential misuse of regulatory power, the question remains whether the voluntary code is sufficient to realise the stated regulatory objectives of containing harmful content and stemming opaque sources of political advertising. A telling detail in the Indian case is that the IAMAI continues to act as a liaison between the ECI and social media companies. Social media companies have secured the buffer of an association to agree to a voluntary code. The looming question is whether such double distancing—first from being direct parties and second from enforceable obligation—can bring about the desired changes.

The fate of the Codes of Ethics and Broadcasting Standards in commercial news television is a sobering reminder of the limitations of self-regulation (Seshu 2018). Mechanisms of peer surveillance and industry-evolved guidelines, in this case, have failed to ensure uniform compliance. In 2009, the News Broadcasters Association (NBA), a professional association for private news broadcasters, drew up a code of ethics and set up the News Broadcasting Standards Disputes Redressal Authority. The industry-wide response was prompted by governmental attempts to make direct regulatory interventions in content. Since its inception, the NBA has advocated for stronger and more uniform application of the code of ethics across television channels. However, the Hoot’s study in 2012 revealed that the NBA “did not take ‘strong punitive action’ against the channels that violated their guidelines” (Akoijam 2012). A more recent report in the Hoot has confirmed that the trend has not been promising in the following years (Seshu 2018). Global trends have also suggested that the self-regulatory model bears the risk of fragmentation and lack of legitimacy. How, then, would this work for the even more volatile field of digital social media and messenger services?

An effective co-regulatory model is much needed to ensure regulatory oversight of the in-built incentives evolved by the industry. For instance, in the broadcasting sector, co-regulatory models in many European countries retain the state’s regulatory power through certification of the code, while allowing sufficient institutional space for industry associations to administer and monitor regulation. Such measures should extend to entities like the IAMAI so that incentives and guidelines developed by the industry are linked to autonomous public statutory bodies with well-defined processes for escalation of complaints and publication of findings.

Institutional Measures

Developing a co-regulatory code is an important first step. As Sharma (2019) has commented,

a top-down imposition of statutory regulations or a checklist of dos-and-don’ts on political parties is unlikely to be a plausible solution on its own. The costs of regulation are too high, the political will for enforcement is too low, and the possibility of loopholes too many. Instead, regulations placed on how political parties use digital media needs to be seen as part of a wider gamut of other urgent reforms and regulations—such as those pertaining to political financing and creating a legal framework for political consulting firms.

A co-regulatory code set within broad-ranging institutional reforms can facilitate the urgent actions required of social media companies. These include transparency in the resources allocated for content screening and in content moderation algorithms, and implementing practices such as inviting public feedback on the training material for content moderation and offering data access for research (Hickok and Udupa 2019). These measures are important because not only human actors, but also the automated lurkers and sorters of artificial intelligence systems, have emerged as a new challenge to the legitimacy of political discourse.

In a useful study, Tan (2019) has proposed an “electoral management digital readiness index” (EMDR) to assess the capacity of electoral management bodies to “respond to digital disruptions.” The criteria include “the type of electoral management model; presence of specific or new regulations governing online campaign and disinformation; confidence in the rule of law, and technological readiness of the digital economy.” In recent years, the Indian government has tried several measures, including guidelines on fake news. In 2018, the Ministry of Information and Broadcasting issued what was seen as a sweeping order to take action against journalists accused of spreading fake news (Business Today 2019). Regional governments have not been silent. The West Bengal government, for instance, has strengthened existing laws to act against citizens who spread misinformation and cause fear among the public. The government has also been actively “preparing a database of fake news stories distributed on social media over the past few years and … kept records of past offenders” (Funke and Flamini nd). Alongside the voluntary code of ethics for social media companies, the Indian state has announced the policy of #AIforAll, which addresses, among other issues, the growing concern over algorithmic bias (NITI Aayog 2018: 85). Last year, the Ministry of Electronics and Information Technology released draft changes that would require internet intermediaries and messaging apps to trace the originators of messages and provide this information to authorised government agencies (Business Today 2019).

The multifarious efforts now put into action should be examined to assess the EMDR index for India. Moreover, close scrutiny is needed to determine which measures are directed at quashing political dissent. Freedom House has reported that India had the highest number of internet shutdowns globally in 2018 (Shahbaz 2018). Non-profit organisation Access Now (2018) reported 134 internet shutdowns in India in 2018, the highest in the world (followed by Pakistan at 12 instances, and Yemen and Iraq at seven instances each). In a recent study, Rydzak (2019: 4) has argued that although social media and digital platforms are “not critical to collective action in India,” they “are readily employed as methods of coordination, and removing them can turn a predictable situation into one that is highly volatile, violent and chaotic.” These reports have revealed that internet shutdowns have been disproportionately high in Jammu and Kashmir. Such drastic steps indicate that what is enforced in the name of public interest action against digital disinformation might serve just the opposite ends—of quashing political dissent and delimiting democratic participation.

Globally, the Cambridge Analytica case has been a watershed in exposing how social media and data analytics companies manipulate public life (Howard 2018). However, the trends are growing and gaining new dimensions, including algorithmic manipulation, internet shutdowns and consolidation in content distribution in countries like India. An effective regulatory response would have to go beyond election-time fixes and attend to deeper problems of carriage and content in the era of “functionally unbundled” digital communication, where communication functions are distributed across different platforms through interoperable standards (Goldman and Chen 2010; Udupa 2012). These include enabling multicasting technologies that could benefit all content providers instead of proprietary forms of content-caching benefiting capital-heavy private players; fair competition among digital content carriers and distributors to avoid trends of vertical and horizontal concentration; and a content policy that embraces the strategy of eight Ds to prevent online harms—deletion, demotion, disclosure, dilution, delay, diversion, deterrence, and digital literacy (Persily 2018). At the same time, public scrutiny of governmental actions, such as sweeping regulations against fake news and internet shutdowns, is a dire necessity. These thoroughgoing efforts need steady cooperation from social media companies as well as civil society pressure to hold corporates and governments to account.

References:

Access Now (2018): “The State of Internet Shutdowns Around the World: The 2018 #Keepiton Report,” https://www.accessnow.org/cms/assets/uploads/2019/06/KIO-Report-final.pd...

Akoijam, Indira (2012): “Kid Gloves Regulation–Part 1,” Hoot, 10 February, http://asu.thehoot.org/research/books/kid-gloves-regulation-part-1-5752.

Article 19 (2017): “Germany: The Act to Improve Enforcement of the Law in Social Networks,” August, https://www.article19.org/wp-content/uploads/2017/09/170901-Legal-Analysis-German-NetzDG-Act.pdf.

BBC Monitoring South Asia (2019): “Facebook, Twitter to Follow Ethics Code - India Poll Body,” 26 September, retrieved from LexisNexis.

Bradshaw, Samantha, and Philip N Howard (2017): “Troops, Trolls, and Troublemakers: A Global Inventory of Organized Social Media Manipulation,” Working Paper 2017, 12, Computational Propaganda Project, University of Oxford, https://comprop.oii.ox.ac.uk/wp-content/uploads/sites/89/2017/07/Troops-...

Business Today (2019): “Govt Planning to Amend IT Act to Crack down on Apps, Sites Unable to Curb Fake News,” 2 January, https://www.businesstoday.in/top-story/govt-planning-to-amend-it-act-to-....

ECI (2019a): “Social Media Platforms Present ‘Voluntary Code of Ethics’ for the 2019 General Elections,” Election Commission of India, No ECI/PN/33/2019, 20 March, https://eci.gov.in/files/file/9467-social-media-platforms-present-volunt...

ECI (2019b): “Press Note: ‘Voluntary Code of Ethics’ by Social Media Platforms to be Observed in the General Election to the Haryana and Maharashtra Legislative Assemblies and All Future Elections,” No ECI/PN/91/2019, Election Commission of India, 26 September, https://eci.gov.in/files/file/10659-“voluntary-code-of-ethics”-by-social-media-platforms-to-be-observed-in-the-general-election-to-the-haryana-maharashtra-legislative-assemblies-and-all-future-elections/.

Funke, Daniel and Daniela Flamini (nd): “A Guide to Anti-Misinformation Actions around the World,” Poynter, https://www.poynter.org/ifcn/anti-misinformation-actions/#india.

Goldman, Ellen and Anne H Chen (2010): “Modelling Policy for New Public Service Media Networks,” Harvard Journal of Law and Technology, Vol 24, No 1, pp 111–70.

Hickok, Elonnai and Sahana Udupa (2019): “Complex Challenge of Extreme Speech Online Can Only Be Tackled if Multiple Stakeholders Collaborate,” Scroll.in, 21 June, https://scroll.in/article/922001/complex-challenge-of-extreme-speech-onl...

Howard, Philip (2018): “Democratizing Data,” The Computational Propaganda Project, https://comprop.oii.ox.ac.uk/research/public-scholarship/fp-democratizin....

NITI Aayog (2018): “National Strategy for Artificial Intelligence #AIforAll,” https://niti.gov.in/writereaddata/files/document_publication/NationalStr....

Ong, Jonathan Corpus and Jason V Cabanes (2018): “Architects of Networked Disinformation: Behind the Scenes of Troll Accounts and Fake News Production in the Philippines,” Newton Tech4Dev Network, https://newtontechfordev.com/wp-content/uploads/2018/02/ARCHITECTS-OF-NE... .

Persily, Nathaniel (2018): “Elections and Democracy in the Digital Age,” Kofi Annan Foundation, https://storage.googleapis.com/kofiannanfoundation.org/2019/02/a6112278-....

Poonam, Snigdha and Samarth Bansal (2019): “Misinformation Is Endangering India’s Election,” Atlantic, 1 April, https://www.theatlantic.com/international/archive/2019/04/india-misinfor...

Rydzak, Jan (2019): “Of Blackouts and Bandhs: The Strategy and Structure of Disconnected Protest in India,” 7 February, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3330413.

Seshu, Geeta (2018): “Broadcasting Self-Regulation: An Unattainable Goal?” Hoot, 25 March, http://asu.thehoot.org/media-watch/law-and-policy/broadcasting-self-regu...

Shahbaz, Adrian (2018): “Fake News, Data Collection, and the Challenge to Democracy,” Freedom on the Net 2018, https://freedomhouse.org/report/freedom-net/freedom-net-2018/rise-digita...

Sharma, Amogh Dhar (2019): “How Far Can Political Parties in India Be Made Accountable for Their Digital Propaganda?” Scroll.in, 10 May, https://scroll.in/article/921340/how-far-can-political-parties-in-india-...

Tan, Netina (2019): “Electoral Management of Digital Campaigns and Disinformation in East and Southeast Asia,” conference presentation, Disinformation and Elections in East and South East Asia: Digital Futures and Fragile Democracies, Columbia University, USA, 3–4 October.

Udupa, Sahana (2012): “Beyond Acquiescence and Surveillance: New Directions for Media Regulation,” Economic & Political Weekly, Vol 47, No 4, pp 101–09.

—(2019): “Nationalism in the Digital Age: Fun as a Metapractice of Extreme Speech,” International Journal of Communication, Vol 13, pp 3143–63.