
Fighting Fake: The Battle for Truth

5th October 2018

Summary

The purpose of this report is to help bring a little clarity into the complex, confusing and fast-changing field of ‘fake news’ and disinformation and outline a few of the things that now need to be done to tackle what is a growing and highly cynical assault on truth.

The Threat

With advances in IT and artificial intelligence, and the evolution of social media, ‘fake news’ / disinformation has become a serious threat to peace and stability in the world. It confuses and misleads the public and contributes to a breakdown in public trust; it destroys reputations, sometimes lives; and it incites fear and anxiety, which undermines social cohesion, democracy and the rule of law. ‘Fake news’ is highly ‘sticky’; it can be generated anonymously and spread at very little cost or risk to the perpetrator; and dealing with the consequences of an attack can be debilitating and all too often ineffective. Moreover, the fact that we struggle to distinguish real from fake (and often fact from opinion) not only makes us vulnerable to deception but complicit in spreading fake information. The fact that ‘likes’ can now be cynically inflated on social media, and reviews manipulated, also promotes and amplifies ill-informed voices and crackpot ideas and creates a false sense of popularity, momentum or relevance — and often the dangerous feeling that our views are more widely held than they are.

Purging the Web and social media of bogus, phoney or extremist material is a colossal task: ‘fake news’ often contains factually correct elements; moreover, political satire, altered images and encryption complicate detection and assessment. Platforms are struggling to monitor the tsunami of text, images and videos posted each minute in hundreds of languages and dialects.

The Tech Giants have accumulated huge volumes of highly personal data on our patterns of behaviour, attitudes, preferences and ‘friends’, and exploit this information, sometimes to our detriment. Their business model means that social intercourse is in effect overseen by third parties who seek to manipulate the participants; and the availability of ‘free’ services has undermined mainstream media outlets and put hundreds of newspapers out of business.

The practice by some politicians of dismissing criticism as ‘fake news’ and attacking journalists as ‘enemies of the people’ damages democracy and public trust and puts reporters in danger; whilst denialists, anti-vaxxers and the like diminish society’s ability to tackle serious threats to public health and the environment. Hostile states and extremist groups are also using disinformation to subvert and destabilise society, helped increasingly by psychometric profiling and micro-targeting. This is war by any other name, fought with disinformation rather than bayonets, bullets and bombs.

The Developing Response

Many organisations are now working to tackle the threat posed by disinformation — some are pursuing structural changes aimed at identifying and taking down fake or hateful material; some seek to improve the quality of information in the public sphere and its reach and accessibility; and some are focused on empowering individuals to become better able to recognise false stories / clickbait. However, efforts to tackle ‘The Problem’ are still poorly monitored and coordinated.
Dozens of books and reports on ‘fake news’, ‘post-truth’, social media, hacking and information warfare have been published in the last 18 months, along with thousands of articles; and there are now many specialist websites, blogs and newsletters. Moreover, a great deal of advice is also circulating online about how to spot and deal with fake information — this includes over 150 fact-checking websites, and a plethora of other interesting initiatives. Some of the key ones are listed in the report, along with 25 referenced examples of actual ‘fake news’.

Public awareness of the threat is growing — one measure of the concern is the record numbers of private citizens donating money to reputable news channels in appreciation of their work. And some countries have introduced legislation to try to control ‘fake news’ and/or hate speech, and the EU has just released a Code of Practice on Disinformation (which the Tech Giants have signed up to). However, serious concerns have also been raised about the chilling effect all this is having on freedom of expression.

This is unquestionably progress, but it is hardly sufficient given the size of the challenge and the unthinkable consequences of failure. What is needed now is: a) major national programmes of public education; b) the updating of international treaties and conventions that underlie global security (to make them fit for the digital age), with meaningful sanctions available to discourage nation states from continuing to mount cyberattacks on rivals and spread disinformation; c) national and international legislation to control the power of the Tech Giants that has real bite and mechanisms to enforce it; and d) resolution of some difficult conundrums to see that free speech and democratic accountability are protected — who should decide what is fake? and who should have powers to take it down? Not easy.

Critical Information

Critical Information is a citizen’s initiative. The intention is not to try to compete with the growing number of (often powerful and well-resourced) organisations that are now fighting fake, rather to help explain, contextualise and publicise their work (which is not currently well understood). Critical Information also helps identify resources suitable for concerned teachers and local activists and is lobbying for change.

Fighting Fake: The Battle for Truth

‘Fake news’, ‘alternative facts’, lies, deception, and dogma presented as ‘fact’ represent a serious threat to peace and stability in the world. These imposters (and the often-sinister forces behind them) confuse and mislead the public and contribute to mistrust in government and mainstream organisations; they damage individuals and businesses,1 destroy reputations, sometimes lives; and they incite fear and anxiety, which undermines social cohesion, democracy and the rule of law. Fake information2 is not a new phenomenon, but advances in IT and artificial intelligence, and the growth and spread of social media, have turned it into a game-changer. Much damage has already been done, and we can expect worse with facial mapping and voice simulation software, which is now being used to produce ‘deep fake’ videos that are increasingly difficult to tell from real.

1 The Threat

No one can stop fake information, but if we are to stand any chance of managing its malign and insidious effects we need to have a clear understanding of the nature of the threat. There is not one problem, there are many, and each requires a different line of attack.
Here’s an overview of some of the issues that people are currently focused on and factors that need to be considered:

• Much ‘fake news’ is (by design) unkind, unpleasant or shocking. This makes it very ‘sticky’ — a major study in March 2018 showed that it is disseminated much more widely on social media than real news; and like rumour and conspiracy theory, it can be generated anonymously and spread at very little cost or risk to the perpetrator.3

• Fake information is manufactured and spread by a multiplicity of malefactors and ‘useful idiots’, ranging from disturbed individuals on their smartphones in their bedrooms, through fraudsters, clickbait merchants and opportunists, to fundamentalists like ISIS and hostile foreign powers seeking to spread fear and confusion and undermine democratic government. And much of the toxic dialogue takes place on social media.

• Social media platforms are corrupted by fake accounts, and by bots and cyborgs which can inflate ‘likes’ and retweets, or manipulate reviews.4 This not only misleads users but also promotes and amplifies ill-informed voices and marginal or crackpot ideas and creates a false sense of popularity, momentum or relevance — we think our opinion is shared by a much bigger proportion of the population than is actually the case. (This is one of the novel problems inherent to social media.) It can also build shared outrage and help propagate disinformation and hate speech.5

• A wide variety of grey6 and underground services are now available on the web that facilitate fake information and ‘cognitive hacking’ (modifying users’ perception), from comment or ‘like’ purchasing and ‘click farms’ (which can carry out a range of tasks automatically, including selling social media followers), to content takedown (removing to order damaging or embarrassing publicity, articles or whole websites), vote manipulation (modifying online polls or voting patterns), and even ‘Breaking News Generators’.7

• In recent years we have retreated into our ‘echo chambers’ in response to the sheer volume of unregulated (and frequently distressing) information invading our private space. In the ‘chamber’ our news is filtered and our views and opinions are reinforced by our circle of friends (and often go unchallenged).8 The fact that we struggle to distinguish real from fake (and fact from opinion) makes us vulnerable to deception, and we become complicit in spreading fake information.

• Everyone is potentially at risk from fake information — take, for example, the Indian restaurant in London hit by malicious rumours of human flesh being served in curries (see Annex); or the CEO reported killed in a car crash;9 or the NGO working in an unstable country where a doctored quote or image puts staff and programmes at risk.10 What’s more, dealing with the consequences of an attack can be highly debilitating and time-consuming, and all too often ineffective (‘mud’ sticks).

• The development of the ‘Internet of Things’ is another cause for concern: people are already saying that it is ‘built to leak’ and that many of the goods and devices intended for our ‘smart’ homes and cities are not being engineered with adequate safeguards. This puts our safety, privacy and property at risk. Senior company executives have also expressed their concerns.
• Until recently the Tech Giants had shown themselves unable or unwilling to purge their platforms of bogus websites or phoney, extremist or illegal material.11 To be fair, this isn’t an easy task because items posted online usually contain some factual elements, even if they are misleadingly or mischievously framed; and then there’s political satire and altered images (which are particularly problematic to handle); and encryption (used e.g. by popular apps like WhatsApp), which means that operators (and regulators) cannot see what’s being posted and have to rely on spotting unusual behaviour. Moreover, companies are having to monitor hundreds of languages and take account of local expressions and subtle nuances of meaning. This is a colossal challenge.

• The Tech Giants have accumulated huge volumes of highly personal data, which they have “strip-mined for profit”. They know better than anyone our patterns of behaviour, attitudes and preferences, and our ‘friends’ on social media. What’s more, they have shown themselves to be far too casual in securing our data.12 For years Facebook failed to anticipate problems and experienced several major data security breaches. It has been described as “a weapon of mass influence” and as both the “most democratic and the least democratic of systems.”

• From the user’s perspective, there are fundamental flaws in the Tech Giants’ business models: social intercourse is in effect being overseen by third parties who seek to manipulate the participants. The fact that Facebook and the like don’t pay their fair share of the tax burden simply compounds the insult. What’s more, the availability of ‘free’ services has broken the business model of mainstream media and helped put newspapers and journalists out of business (the very institutions and people who help us understand and contextualise the news).13

• Some countries have introduced legislation to try to control what appears on social media platforms (Germany was one of the first with its ‘NetzDG’ law on hate speech), but there are growing concerns about the unintended consequences, most notably the chilling effect on freedom of expression. Germany may revise NetzDG after complaints that the fear of hefty fines had prompted internet firms to err on the side of caution and block more content than necessary, including satirical comment.

• The practice of dismissing criticism as ‘fake news’ and attacking journalists as ‘enemies of the people’ is now commonly used by vulgarian politicians and dictators to divert attention from awkward facts or embarrassing situations, but it damages democracy and public trust and puts journalists in danger.14 Sound bites and opinion have replaced accepted ‘facts’ and reasoned debate.15 Similar issues are associated with denialists and con artists pushing alternative therapies: by generating hostility to science and mainstream medicine, their behaviour restricts society’s ability to tackle serious threats to public health (like measles) and the environment (not least climate change).

• There have been many examples of politicians and special interest groups using simplistic arguments when addressing complex technical, socio-economic or environmental issues, which can mislead people. And this tactic is difficult to counter because diehard proponents cry foul when claims are challenged (“d’you think the public is stupid?”). This is a sure way to close down discussion. Indeed, arguing can prove counter-productive (the so-called ‘backfire effect’).16

• Hostile states and extremist groups are using hacking and disinformation to subvert and destabilise society,17 and sometimes rewrite history (for example using fake or partial fact-checking sites18). Their job has been made much easier and more effective by the development of psychometric profiling, which enables specific groups to be targeted with personalised information designed to influence how (and even whether) they vote.19

• Data hacking and selective leaking can be very effective tools for public opinion manipulation: in the eyes of the general population the very fact a leak occurred validates ALL of the leaked data as legitimate, regardless of whether this is true; and the victim will have a hard time proving that tampering occurred, let alone which documents were modified. Their credibility will be all but shot. (One partial defence is illustrated in the sketch after this list.)
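As flagged in the last bullet, there is at least a partial technical defence against post-leak tampering: if an organisation publishes (or independently timestamps) cryptographic digests of its documents in advance, any later doctoring of a leaked copy becomes demonstrable. Below is a minimal sketch of the idea using SHA-256 from Python’s standard library; the folder and file names are illustrative assumptions, not anything prescribed in this report.

# Sketch: build a hash 'manifest' of original documents, then check leaked
# copies against it. A mismatch proves a file was modified after the manifest
# was published. Folder and file names here are hypothetical examples.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(folder: Path, manifest: Path) -> None:
    """Record a digest for every document; publish or timestamp this file."""
    digests = {p.name: sha256_of(p) for p in sorted(folder.glob("*")) if p.is_file()}
    manifest.write_text(json.dumps(digests, indent=2))

def check_leak(leak_folder: Path, manifest: Path) -> None:
    """Compare leaked copies against the recorded digests."""
    recorded = json.loads(manifest.read_text())
    for name, digest in recorded.items():
        copy = leak_folder / name
        if not copy.exists():
            print(f"{name}: not present in the leak")
        elif sha256_of(copy) != digest:
            print(f"{name}: MODIFIED - does not match the published digest")
        else:
            print(f"{name}: matches the original")

if __name__ == "__main__":
    build_manifest(Path("originals"), Path("manifest.json"))
    check_leak(Path("leaked_copies"), Path("manifest.json"))

The scheme only works if the digests are lodged with a third party (or otherwise timestamped) before any leak, so the victim can point to a record that pre-dates the dispute.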
Today we are living in the ‘Information Technology Age’, where war is fought with information as much as, and possibly more than, bayonets, bullets and bombs. It is no longer a matter of whose army wins, but whose story wins (to quote Joseph Nye). And the Kremlin has shown itself to have some of the best storytellers in the business, including those operating from the Internet Research Agency (‘Troll Factory’) in St. Petersburg. Moreover, these days nothing is out of bounds in this new and dirty war — we have already seen incendiary fake reports of a Russian teenager abducted and raped by immigrants in Germany; babies crucified in eastern Ukraine; and most recently, coordinated efforts in the US to erode public confidence in vaccination (which will lead to children dying).20

2 The Developing Response to Disinformation

Many organisations are now working to tackle the monster created by deception and disinformation: some are pursuing structural changes aimed at identifying and taking down fake or hateful material and prosecuting the platforms that carry it (and perpetrators, where they can be traced); some are seeking to improve the quality of information in the public sphere, and its reach and accessibility; and some are working to empower individuals (especially the young) to become better able to recognise false stories and clickbait, and less likely to fall for them.

Educational establishments are actively promoting critical thinking and media literacy in response to the threat. NGOs and community-based organisations are working to safeguard children online and protect people’s privacy. Fact-checking teams are scrutinising the news and the assertions of public figures. News agencies are sending journalists into schools to run classes in media literacy and teach children to detect BS (bullsh*t); they are also running competitions to promote quality journalism. Consultants are training companies in cyber-security and ethical hacking. Legislators and civil servants are attempting to regulate and police deceit, bigotry and hate speech online. Security services are tracking malefactors and working to protect vital infrastructure from attack.21 International agencies are involved in digital forensics and debunking disinformation and malicious propaganda; and the social media giants, which for so long brushed the problem under the carpet, are now putting more resources into taking down fake accounts and toxic and inflammatory material, supporting fact-checking initiatives, and mounting publicity campaigns to warn of the dangers. [These frontline services are shown on the report cover.]

That said, efforts to tackle ‘The Problem’ are as yet poorly coordinated, which is hardly surprising given the wide range and type of organisations involved, the speed of developments, and, for some, the necessity of maintaining a low profile to avoid trolling, denial of service attacks and death threats.

3 Observations on Progress & Remaining Challenges

Here are some personal observations on these various initiatives and the impact they are having on The Problem. We’ll start with the good news:

• Leading internet pioneers have been speaking out about the way the web has been hijacked and used to corrupt public services, spread untruths and peddle hate. They include Vint Cerf (‘father of the internet’), Tim Berners-Lee (inventor of the Web), Steve Wozniak (Apple’s co-founder), and Jaron Lanier (virtual reality pioneer/guru).
• Dozens of books and reports on ‘fake news’, ‘post-truth’, social media, hacking and information warfare have been published in the last 18 months (well over 80 titles in English alone),22 along with thousands of articles; and many specialist blogs, newsletters, Twitter feeds and Facebook pages are now available (see e.g. EUvsDisinfo, Debunking Denialism, DFRLab and others).

• There is also a great deal of advice circulating online for spotting fakes and not blindly passing on suspect material, and a growing number of tools are available, for example to check the authenticity of images (reverse image searches) or of ‘likes’ or rankings of goods and services. (A toy illustration of the technique behind image checking follows this list.)

• Over 150 fact-checking initiatives are today operating in over 50 countries — as of today, 50 are verified signatories of the International Fact Checking Network. There are also schemes that flag suspect websites, and initiatives which identify clickbait (e.g. ‘StopClickbait’, which ‘does the clicking for you’).

• Open source research and digital forensics have proved highly effective in unmasking disinformation — examples include the meticulous work of Bellingcat on the downing of Malaysia Airlines Flight 17, documenting the use of chemical weapons in Syria, and naming one of the Russian Skripal poisoning suspects.

• At least two global networks of researchers, scholars and policy analysts have been established to provide high-level help and advice on mis/disinformation;23 and an increasing number of international conferences are bringing together ‘Fake Fighters’/‘Truth Seekers’ to report on progress and compare notes, notably the Atlantic Council’s 360/Open Source, MisInfoCom, Fighting Abuse @Scale, and the Global Fact Checking Summit.

• One can also find a plethora of interesting new initiatives, some international, such as the recent establishment of a Global Disinformation Index (to “bring metrics to the World's polluted information ecosystem”) and the Transatlantic Forum on Disinformation, and some local, such as Disinformation Watch (on Malta); some focused on the media (such as the News Integrity Initiative & Reframing), others on email scams or social media hoaxes (e.g. Hoax-Slayer.com); and initiatives on ethical hacking or internet security (e.g. the Internet Defense Prize).

• A dozen or more countries have now set up units to counter the new waves of cyber-propaganda, as have the EU (East StratCom Task Force) and NATO (StratCom Centre of Excellence); and the EU has just released a Code of Practice on Disinformation (which the Tech Giants have signed24).

• Around 30 countries have introduced, or are in the process of introducing, legislation to tackle online misinformation. [Poynter provides a running guide.]

• It is fair to say that, with all this activity, public awareness of the threat posed by ‘post-truth’ and its consequences is growing. One indicator of this concern is that reputable news outlets like the Guardian, New York Times and Washington Post are reporting record numbers of private citizens donating money in appreciation of their work.

• And the General Data Protection Regulation, which came into force in the European Union in May 2018, now offers hope that service providers will mend their ways in the face of heavy fines if they are found negligent in their handling and use of EU nationals’ data. It is likely other countries will follow.
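For readers curious what checking the authenticity of images involves under the hood: reverse image search services typically rely on some form of ‘perceptual hashing’, a fingerprint that changes only slightly when a picture is resized or re-compressed, so near-duplicates can be found and recycled or doctored variants flagged. The following is a toy sketch of the technique, not a reconstruction of any tool named above; it assumes the third-party Python packages Pillow and imagehash, and the file names are hypothetical.

# Toy sketch: compare a suspect image against a known original using a
# perceptual hash. A small Hamming distance suggests the same picture
# (perhaps re-saved or resized); a large one suggests a different or
# heavily altered image. Requires: pip install Pillow imagehash
from PIL import Image
import imagehash

def compare(original_path: str, suspect_path: str, threshold: int = 10) -> None:
    original = imagehash.phash(Image.open(original_path))
    suspect = imagehash.phash(Image.open(suspect_path))
    distance = original - suspect  # Hamming distance between the 64-bit hashes
    if distance == 0:
        print("Identical perceptual hash: almost certainly the same image.")
    elif distance <= threshold:
        print(f"Distance {distance}: likely the same image, re-saved or resized.")
    else:
        print(f"Distance {distance}: probably a different or doctored image.")

if __name__ == "__main__":
    compare("archive_photo.jpg", "viral_post.jpg")  # hypothetical file names

This is how, for instance, old crash footage recycled as ‘breaking news’ can be traced back to its source: the recycled copy hashes almost identically to the archived original.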

Much More Required

So this is a good start, but it is hardly sufficient given the size of the challenge and the unthinkable consequences of failure. There are also some difficult questions still to resolve, not least who decides what is fake/damaging, who has powers to take it down from different media, and how might free speech be affected? What is needed now is a major programme of public education in all countries, and national and international legislation that has real bite. We need to see more concerted action to:

• clarify what’s going on and put developments in context (which is the purpose of this paper);

• promote critical thinking and media literacy so that people are better able to recognise ‘clickbait’ and routinely use fact-checking websites (too few appreciate the importance of fact-checking);

• encourage more informed public debate on the dangers of thoughtless retweeting of scurrilous or contentious material (people need to be made to feel more responsibility for their actions);

• put more pressure on social media platforms and internet service providers to be more transparent and accountable and to take down bogus, misleading or extremist websites more swiftly;25 ditto sites selling ‘likes’, and practices that lead to exaggerated reviews of goods and services;26

• force the Tech Giants to modify the more addictive / psychologically-harmful features of their smartphones and social media software (introducing licensing/regulation procedures, if necessary27) and require them to accept subscriptions;28

• protect printed media, quality journalism and the open web29 — and crack down on the intimidation of political candidates during elections; and require public interest broadcasters to desist from the practice of ‘false equivalence’;30

• introduce new legislation and specific measures to combat the threat posed by cyberattacks and malicious propaganda, and name and shame those involved;31 and

• prosecute people who deliberately lie to or mislead others, or who post or retweet malicious or illegal material — laws are available in Britain (see box) but they have not thus far been used in the context of ‘fake news’ or people telling bare-faced lies, such as promises of £350 million a week for the NHS.32

Prosecuting Liars & Fakers

In Britain liars can be prosecuted under Section 127 of the 2003 Communications Act, which states that “a person is guilty of an offence if, for the purpose of causing annoyance, inconvenience or needless anxiety to another, he sends by means of a public electronic communications network, a message that he knows to be false [or] causes such a message to be sent.” On conviction he shall be liable “to imprisonment for a term not exceeding six months or to a fine not exceeding level 5 on the standard scale, or to both.”

There are also provisions under the 1988 Malicious Communications Act, which state that any person who sends to another person a letter or other article which conveys “information which is false and known or believed to be false by the sender” is guilty of an offence “if his purpose.. in sending it is that it should... cause distress or anxiety to the recipient...”

And finally, there is the question of bringing some of the international treaties and conventions that underlie global security into the digital age to take account of the existential threat now posed by ‘fake news’ and disinformation.
This includes measures to sanction nation states that exploit the internet and social media to mount cyberattacks on rivals, or deliberately spread misinformation to confuse or foment dissent. In June 2015 a UN Expert Group — which included the US, Russia, China and the UK — did recognise that the UN Charter should apply in its entirety to cyberspace,33 and there have been efforts since then to update various articles including, for example, the right to privacy in the digital age.34 Some of the above actions will be challenging, but inaction is no longer an option.

End Note

I have been concerned about ‘fake news’ for some time and in 2017 set up Critical Information to raise public awareness of the threat. I have spent most of my professional career championing the cause of accurate information, primarily within the NGO sector, and chair a local humanist group promoting reason and logic in civil discourse. I hate the very idea of ‘post truth’ and the advent of ‘alternative facts’, because lies, deceit and false information undermine trust, the ‘glue’ that holds society together. Sam Harris is right when he says that lies are “the social equivalent of toxic waste”; and lies are slowly poisoning social intercourse, undermining democracy and damaging international relations.

The intention behind Critical Information is not to try to compete with the growing number of organisations that are fighting fake, rather to help explain, contextualise and publicise their work, and identify resources suitable for local activists and teachers. Over the last 18 months I have been focused on understanding the problem, building a database (to handle the rapidly increasing flow of data) and a website (to display the findings), and also taking the message to local community groups;35 and I hope shortly to be engaging with schools and colleges. The work is political but not party-political, and whilst the initiative has humanist roots it aims to embrace and empower people of all faiths and none.36

I trust that Critical Information (and publications like this) help bring a little clarity into this complex and fast-changing field and make it apparent to the reader what we all now need to do to tackle this cynical and perilous assault on truth. This report is ‘work in progress’: I am making changes to the text as and when new information, ideas or publications emerge, whilst trying to keep the paper to around the same overall length. I welcome comments and suggestions for improvement.

Dr Mike Flood

Annex: Examples of ‘Fake News’

‘Fake news’ can involve stories that are true but mischievously or misleadingly presented, or complete fabrications. They can be trivial or absurd (‘Freddie Starr Ate My Hamster’), an April Fool’s prank (the Spaghetti Harvest in Switzerland), plain nasty (‘Rowan Atkinson Killed in Car Crash’), or simply theatre (the War of the Worlds radio drama); or they can involve politically explosive allegations (‘AIDS Virus Created by CIA’). The motive can range from having a laugh, making money or gaining advantage over rivals (‘Wildlife Photographer of the Year Disqualified’), to hiding a heinous crime (the downing of Flight MH17 over Ukraine), or doing serious damage to a business, movement, political party or country. The examples listed below show just how diverse ‘fake news’ can be. They are in rough chronological order.
• Project Azorian: In 1974, a large and highly unusual ship, the Hughes Glomar Explorer, set sail from Long Beach, California, heading for a location 1,500 miles NW of Hawaii where, it was claimed, it would scoop manganese nodules off the seabed. But this was a CIA cover story: its real mission was an audacious search for a lost Russian K-129 type nuclear submarine lying 3 miles down. Amazingly, the vessel’s giant steel claws did successfully seize the sub, but on the way up the gear snapped, and the sub fell back to the seabed.

• Letter Bomb Backfires [Nov 1994]: Iraqi terrorist Khay Rahnajet didn’t pay enough postage on a letter bomb. It came back with ‘return to sender’ stamped on it. Forgetting it was the bomb, he opened it and was blown to bits.

• Woman Arrested for Defecating on Boss’ Desk after Winning the Lottery [Apr 2016]: This was the biggest fake crime news hit on Facebook in 2016. The hoax generated more than 1.7 million shares, reactions and comments on Facebook.

• Emirates Flight 521 Brought Down by Missile [Aug 2016]: After Yemen's Houthi rebels claimed they had attacked Dubai International Airport with a drone, fake visuals flooded social media. One was later shown to be dramatic footage of an Emirati flight that crash-landed at the airport in 2016 (all on board survived).

• President Obama Banned Reciting the Pledge of Allegiance in Schools [Oct 2016]: This hoax was published by ‘ABCNews.com.co’, a fake site made to look like ABC News, and generated more than 2.1 million shares, comments and reactions on Facebook in just two months.

• Karri Twist Indian Restaurant ‘Served Human Meat’ [May 2017]: The restaurant (in New Cross, South London) faced closure after a fake news story spread on social media accused staff of serving human meat to diners.

• Starbucks Dreamer Day [Aug 2017]: A post on social media claimed that Starbucks was giving a deep discount to undocumented immigrants on a special day (11 Aug). The claim appears to have started on the infamous 4chan website, apparently concocted as a way to get “all illegals to go at once and demand free stuff” at a “liberal place.” Some commenters suggested calling Immigration and Customs Enforcement...

• Chicken Infected with Deadly Nipah Virus [May 2018]: A viral hoax declaring that 60% of the chicken supplies in the District of Kozhikode (in India) were infected with the Nipah virus spread like wildfire in Kannur. People stopped buying chicken, sales plummeted, and many sellers went out of business... [Nipah is deadly: it can be transmitted from animals to humans and there is no cure.]

• Smile! You’re on Russian State TV [Jun 2018]: A smile was apparently added to North Korean leader Kim Jong-un’s facial expression in a Sunday night show broadcast on 3 June by Russian state TV Rossiya 1. He was meeting with Foreign Minister Sergey Lavrov, who visited Pyongyang on 31 May.

• Alex Jones/InfoWars in Court for Defamation [Jul 2018]: Veronique De La Rosa and her husband lost their 6-year-old son Noah in the Sandy Hook shooting. They have had to relocate seven times because of online harassment and death threats. They now live in a high-security community hundreds of miles from where their son is buried.

• Buried Woman about to be Stoned in Iran [Aug 2018]: Multiple claims were made on social media that a photo showed a woman in Iran ‘about to be stoned’.
It is actually a photo of 66-year-old Maria Gabriela Ruiz, one of three people who voluntarily remained buried up to their necks for several days in July 2003 as a protest over the Colombian Government’s failure to relocate 150 displaced persons to a safer part of the city of Cali.

• Fake Photos in Burmese Military Report [Aug 2018]: A new book, written by the Myanmar military, uses fake photographs and chillingly rewrites history in what appears to be an attempt to justify the killing of thousands of Rohingya. One photo was taken in Bangladesh, one in Tanzania, and a third is falsely labelled as depicting Rohingya entering Myanmar from Bangladesh when in reality it shows them attempting to leave.

End Notes

1 In business, fake information can misalign investments, reduce returns and defraud advertisers; and it can derail demand forecasts, inventory models and planning.

2 There has been much debate about terms like ‘fake news’ and ‘alternative facts’ — I have used ‘fake information’ in this paper as a coverall term. However you define it, the real problem lies with disinformation. The new (Sep 2018) EU Code of Practice on Disinformation has a useful definition of ‘disinformation’, which it describes as “verifiably false or misleading information” that is “created, presented and disseminated for economic gain or to deceive the public” and “may cause harm”. The harm includes “threats to democratic political and policymaking processes as well as public goods such as the protection of EU citizens’ health, the environment or security”. The definition does not include misleading advertising, reporting errors, satire and parody or “clearly identified partisan news and commentary”.

3 Anonymity online is a complicated issue. Organisations like the Electronic Frontier Foundation have long argued that anonymity is crucial for democracy because it allows marginalised voices to speak out without fear of retribution. It is also endorsed by the UN Human Rights Council. On the other hand, requiring verified authentication credentials before allowing people to set up websites or post online (as Facebook and Google+ do in their terms of service) makes it more difficult for criminals and nation states or their proxies to operate. It also facilitates tracking down fraudsters, liars and trouble-makers. With the severity of the problems now being associated with fake information, it may be that the balance of argument is moving in favour of those who want to see credentials authenticated and names disclosed.

4 Different kinds of bots spread different kinds of misinformation. For example, so-called ‘content polluters’ may use anti-vaccine or racist messages as bait to entice people to click on advertisements and links to malicious websites.

5 Researchers at Warwick University have recently suggested that “social media can act as a propagation mechanism between online hate speech and real-life violent crime.” They came to this conclusion after examining more than 3,300 attacks against refugees in Germany over a two-year period. They found that towns where Facebook use was higher than average consistently experienced more attacks on refugees.

6 Grey (or gray) goods are products sold by a manufacturer or their authorised agent outside the terms of the agreement between the reseller and the manufacturer.

7 A disturbing report by TrendMicro details such underground services available in China, Russia, India and the Middle East.

8 Whether echo chambers cause polarisation, and polarisation is bad for democracy, are interesting questions. Alice Thwaite (who set up the Echo Chamber Club) has recently come to the conclusion that her previous presumptions of harm are “entirely wrong” — although she also notes that “this does not mean that social media platforms should not be designed in a way that facilitates better agreement, disagreement and access to information.” Indeed, this will be one of the aims when the Club is relaunched.

9 In June 2017, a news story (traced to the infamous ‘anything-goes’ 4chan website) claimed that the head of the cryptocurrency Ethereum, Vitalik Buterin, had been killed in a car crash.
Buterin posted a selfie on Twitter to disprove the rumours (pictured in text), but by then more than 12% of Ethereum’s $4-billion market value had been wiped out.

10 The particular problems created by fake information in the NGO sector are outlined and explored in ‘Faking It: fake news and how it impacts on the charity sector’.

11 The Tech Giants don’t accept that they are de facto publishers and bear some responsibility for content and the consequences of material that they carry or encourage on their platforms. In the USA (where Facebook, YouTube, etc. originate) they have legal immunity because of ‘safe harbour’ provisions in the 1996 Communications Decency Act.

12 Some corporations have sold our data to others; and there have been numerous data breaches which have led to reputation damage (and potentially identity theft, fraud and blackmail).

13 Some media professionals have suggested that “fake news is the best thing that has happened for decades” as “it gives mainstream quality journalism the opportunity to show that it has value based on expertise, ethics, engagement and experience.” It has been described as “a wake-up call to be more transparent, relevant, and to add value to people’s lives.” But it remains to be seen whether this incentive for change takes effect before the patient dies.

14 In August, over 350 news outlets in the US joined a Boston Globe campaign denouncing the president's ‘dirty war’ against the media, using the hashtag #EnemyOfNone.

15 Some commentators believe that actual fake news is on the decline and that the real threat now comes from “fake news as a political smear,” which according to Joseph Kahn [Ed. New York Times] is on the increase. Kahn made the comment in a fascinating panel discussion on ‘fake news’ hosted at Davos in Jan 2018.

16 Expect an especially strong reaction from conspiracy theorists, not least climate change deniers who see criticism as proof that ‘deep state’ forces are at work. This represents a massive challenge for democracy.

17 Russia is using ‘reflexive control’ to attack the West and discredit democracy — and feed patriotic fervour at home and promote national unity.

18 ‘Fact Check ’ is an example; and one must also wonder about RT’s ‘special project’, FakeCheck, which “helps you to separate fact from fiction and transit into a post-fake reality.”

19 It is interesting to speculate on how people living in autocratic countries are affected by ‘fake news’, whether generated abroad or broadcast over state-controlled media. Britain has a unit at GCHQ, the Joint Threat Research Intelligence Group, whose identity was revealed by Edward Snowden in 2014. Some of JTRIG’s liberated material on ‘dirty tricks’ makes disturbing reading. It is somewhat ironic that in Russia (and China, Turkey and Indonesia), which use social media to monitor and control people and suppress free speech, trust in government and the media has grown in recent years whilst it has declined significantly in many Western democracies (see Edelman’s ‘Trust Barometer’).

20 In Aug 2018 a study discovered several Twitter accounts, known to belong to Russian trolls, had distributed malicious content about vaccines to sow division before and during the American presidential election.
It appears that the Russian trolls played both sides against each other, tweeting pro- and anti-vaccine content in a politically charged context. “Accounts masquerading as legitimate users create false equivalency, eroding public consensus on vaccination.” [In the US the rate of children not receiving vaccines for non-medical reasons is climbing.]

21 The security services also monitor activity on the Dark Web, something that is not addressed in this paper.

22 Here are 10 excellent examples from: Atlantic Council; Brookings; Chatham House; FullFact; Harvard Business Review; House of Commons; New York Times; RAND Corporation; TrendLabs & UNESCO. [If you think another report should be included, which of the above would you suggest we exchange it for and why?]

23 These initiatives are Global Experts on Debunking of Misinformation and DisinfoPortal.com.

24 The EU Code of Practice covers five areas of competence, with commitments to disrupt advertising revenues from companies that spread disinformation, tackle fake accounts and online bots, make political advertising more transparent, allow users to report instances of disinformation more easily, and provide better frameworks to monitor the spread of disinformation. The move follows an Action Plan put forward by the Commission in April, which introduced the idea of platforms implementing self-regulatory tools to tackle the spread and impact of online disinformation.

25 Getting the Tech Giants to sign up to the EU Code of Practice on Disinformation is a good start, but doubts have already been expressed about the viability of the Code. The EU set up a Forum on Disinformation to help develop the basic idea. It comprised two autonomous groups: one consisted of major online platforms and exchanges and their trade associations, advertisers and association agencies (called the ‘Working Group’), and the other, representatives of the media, civil society, fact-checkers and academia (dubbed the ‘Sounding Board’). The latter has strongly criticised the Code of Practice, saying it “contains no common approach, no meaningful commitments, no measurable objectives or KPIs [Key Performance Indicators], no compliance or enforcement tools and hence no possibility to monitor the implementation process.” Watch this space...

26 It took years for the platforms to close down Alex Jones and InfoWars for spreading outrageous and hurtful fake conspiracy theories about the Sandy Hook shootings and other atrocities. (He has now been taken to court by several of his victims, who have been mercilessly pursued by some of Jones’ more extremist followers...)

27 Social media is highly addictive by design, and there are now well-founded concerns about how our addiction to the technology may be affecting our physical and mental health, and our ability to communicate and interact meaningfully with others face-to-face. The impact on children is of particular concern.

28 Some people would prefer to pay for services rather than have companies make money from gathering and selling data about them to advertisers, political parties, and the like.

29 In early 2018 Prime Minister May announced a review into Britain’s printed press (which will “look into funding models to ensure the continuation of high-quality national and regional journalism”). She warned at the time that the disappearance of hundreds of local titles was “dangerous for our democracy.” She also spoke of cracking down on the intimidation of political candidates on social media during elections.
The Labour Party has called for a windfall tax to be levied on Tech Giants to “pay for public interest journalism”. It has also floated the idea of the likes of Google, Amazon, Facebook and internet providers paying a ‘digital licence fee’. Without greater investment in investigative, public interest journalism, it said, there is a risk that a “few tech giants and unaccountable billionaires will control huge swathes of our public space and debate” (a charge previously levelled at print media moguls). The Liberals have called for Google, Amazon and Facebook to be broken up, pointing out that Google now drives 89% of internet searches, 95% of young adults on the internet use a Facebook product, and Amazon accounts for 75% of ebook sales, while Google and Apple combined provide 99% of mobile operating systems. Moreover, these firms’ sheer size makes them “a barrier rather than a boon to entrepreneurship”. The inability of the authorities to force the Tech Giants to pay their fair share of tax is nothing short of scandalous.

30 False equivalence is about giving prominence to minority views (in the interest of ‘balance’) on topics such as vaccination and climate change, where there is an overwhelming body of scientific evidence to the contrary. In 2017 the BBC had to apologise for giving airtime to Lord Lawson’s denial of climate change following a barrage of complaints. It is too early to say whether this rap on the knuckles from viewers will change Auntie’s policy.

31 Bob Seely MP (who is on the Foreign Affairs Select Committee) has made some interesting proposals: he says we need to see what’s happening as “a global struggle for open society” and properly fund the BBC World Service (and dramatically increase the Russian service budget), and change our visa regime; and we need a permanent multi-agency group to expose harmful cyber activities; a foreign agents representation act (to name and shame those working as Putin proxies); and a counter-propaganda bill. [BBC Radio 4’s Today Programme, 5 Oct 2018 at ~08:58.]

32 The claim on the Vote Leave Battle Bus was carefully worded: “We send the EU £350 million a week, let’s fund our NHS instead”; however, the way it was presented led many to understand that on Brexit the NHS would be getting the money.

33 The UN Charter includes a rule prohibiting interventions in the domestic affairs of states, and an Article [2(4)] which prohibits the threat or use of force against the territorial independence or political integrity of any state. Moreover, Article 51 recognises the inherent right of any state to take action in self-defence where cyber operations result in, or present an imminent threat of, death and destruction on an equivalent scale to an armed attack (recent comment by UK Attorney General, Jeremy Wright MP).

34 The UN Human Rights Council has affirmed that “the same rights that people have offline must also be protected online, including the right to privacy” and calls on States to “promote quality education and lifelong education opportunities for all to foster, inter alia, digital literacy and the technical skills required to effectively protect their privacy”.

35 The initial focus has been on local humanist groups (the plan is to move on to environment groups and groups involved in international development). One talk (to the Conway Hall Ethical Society) has just been published.

36 The various outreach programmes are described on the Critical Information website.