
Brief: None of the Big Tech platforms is effectively curbing Covid disinformation as the world confronts a third wave1

Luca Nicotra, Campaign Director at Avaaz, said: "It's exhausting to keep saying this, but none of the tech platforms are doing enough to curb the toxic content polluting their platforms. We urgently need a 'Paris Agreement for Disinformation'. But that requires the Big Tech platforms to agree on a strong Code of Practice with meaningful commitments and measurable action. This is their last chance saloon - otherwise, get ready for regulation."

We’re at a crucial moment in the fight against disinformation. If we get a strong Code of Practice - one capable of serving as a blueprint for enforcement under the DSA - then the EU could lead a global reduction in disinformation while protecting freedom of speech. But the EU needs the ambition of a Paris-style Agreement for disinformation, one that starts to treat disinformation the way we treat CO2: measuring it, setting targets to reduce the level our societies are exposed to, and holding Big Tech, whose algorithms are responsible for accelerating disinformation, accountable for the levels on their platforms.

In light of that, this preliminary cross-platform research analyses Covid disinformation “emissions” based on a sample of available data from the platforms, and finds that:

- Facebook is the biggest “emitter” of Covid disinformation - accounting for 68% of the total interactions on fact-checked Covid disinformation we were able to document across all four platforms.

- YouTube is the worst of the four platforms when it comes to acting on content - failing to take action on 93% of the fact-checked content analyzed in the study - with Twitter also performing poorly, leaving 74% of content unactioned.

- YouTube and Twitter also need to step up on labeling - analysis of our sample of Covid disinformation found that they seem to focus solely on removing content, which is especially problematic when dealing with political content.

1 *Document as a whole is not for distribution or publication. If this research is used, Avaaz must be informed and can be cited as follows: “Preliminary research from global civic organization Avaaz shows/suggests.” Findings presented below may be updated as our investigation continues. After reviewing our reporting, the social media platform of concern may take moderation actions against the content described in this brief, such as flagging it or removing it from circulation.*

Covid disinformation continues to pollute our ecosystem - and Facebook is the biggest “emitter”

Avaaz analysed Covid disinformation “emissions” across Facebook, Instagram, Twitter and YouTube to identify which was the biggest “emitter” of fact-checked disinformation.

Facebook is the platform where we recorded the largest number of overall interactions on Covid disinformation - an overwhelming 68% of interactions, compared to 12% on Instagram, 11% on Twitter, and 9% on YouTube.

It should be noted that a number of factors can influence the amount of “emissions” we can measure through our methodology. One is platform size: Facebook has 2,850 million monthly active users, compared to 300 million for Twitter. In addition, some platforms have partnerships with fact-checking organisations, and video content is more difficult to fact-check than text-based content.

Tech platforms are still failing to extensively act on Covid disinformation

The big four tech platforms are failing to act on 37% of the COVID-19 disinformation2 content sample studied in this research. This failure comes 18 months after the start of the pandemic and the infodemic - and just over one year after the start of the COVID-19 monitoring and reporting programme, which asks tech platforms to report on their efforts to combat the spread of Covid disinformation on their platforms.

Again, there are substantial differences in how each platform is performing. In this case, it is YouTube that took the least action on the Covid disinformation we analysed:

2 Under COVID-19 disinformation, Avaaz includes content that could cause public harm by undermining public health in the following areas: a) preventing disease, e.g., false information on diseases, epidemics and pandemics, and anti-vaccination content; b) prolonging life and promoting health, e.g., bogus cures and/or encouragement to discontinue recognised medical treatments; c) creating distrust in health institutions, health organisations, medical practice and their recommendations, e.g., false information implying that clinicians or governments are creating or hiding health risks; d) health-related misinformation that can induce fear and panic, e.g., misinformation stating that the coronavirus is a human-made bio-weapon being used against certain communities or that Chinese products may contain the virus.

YouTube’s lack of action on Covid-19 disinformation content is expected given the platform’s reluctance to transparently define a more robust moderation policy. Although it is true that fact-checking videos is more difficult than fact-checking written posts or images, YouTube has had ample time to find solutions to these challenges. For example, the platform successfully found ways to limit the spread of extremist terrorist content, proving that solutions are available when the platform decides to prioritise an issue.

When analysing the interactions on that unactioned content, Facebook again emerges as the biggest “emitter”, responsible for 49% of the total interactions on unactioned content3, compared to just 1% for Instagram. Crucially, however, comparing each platform's emissions will require more transparency and a stricter auditing methodology - something EU regulation can put in place to better protect European citizens.

3 Avaaz defines “unactioned content” as content that is available on the platform in its original posted form, without a label such as a “False information” note or an overlay directing users to a fact-check article. We consider the removal of content an action by the platform, and such content is not included in the “unactioned content” category.

There’s a big split between platforms that correct Covid disinformation and those that rely on takedowns

Platforms took action on 63% of the content in our sample: 53% of all content was labelled, while a much smaller proportion (10%) was removed.

However, when comparing the platforms, it’s clear that there are big differences in the types of action each one takes - differences with major implications for how to balance the fight against disinformation with respect for freedom of speech.

In the sample we analyzed, Twitter and YouTube both overwhelmingly preferred to remove content rather than label it for their users. The measures taken by Twitter and YouTube were geared 100% towards removing content from the platform: we did not find a single piece of content with a label or false-information warning on either platform, despite the fact that Twitter, unlike YouTube, does have a Covid disinformation labelling policy. Favouring the removal of content over fact-checking labels can become a risk for freedom of speech.

In contrast, Facebook and Instagram have a robust fact-checking program and take the labeling approach, in which readers are informed that a post contains false content and are invited to learn more about why the rating was applied.

Tech platforms are not treating disinformation ‘emissions’ in different languages equally

Avaaz and others have repeatedly identified that Facebook has a blind spot when it comes to disinformation in some languages other than English. This blind spot is replicated across the other tech platforms.

According to our analysis:

- Nearly half (49%) of fact-checked misinformation content in major non-English European languages is not acted upon by the four tech platforms, compared to only 29% of English-language content.

- Italian speakers are least protected from misinformation, with measures lacking for 84% of Italian content examined. Portuguese is the next most neglected European language, with measures lacking for 62% of Portuguese content examined.

- Spanish and German speakers were most protected, with measures lacking for 20% and 21% of content, respectively.

Data / Methodology Toplines

This briefing was designed to provide a preliminary investigation into the spread of Covid disinformation on different platforms, based on a representative sample analysis that can be replicated by other independent researchers. This preliminary analysis can only offer a glimpse into the platforms’ failure to act effectively to curb Covid disinformation. A broader, more robust analysis will require more transparency and cooperation from the social media platforms.

Here’s how this analysis was conducted:

● We documented 240 pieces of fact-checked misinformation spreading on Facebook, Instagram, Twitter and YouTube, which had collected a total of 2,890,033 interactions.

● All the content contained Covid-19 disinformation, rated as false or mostly false by IFCN members or other reputable fact-checking organizations, and was fact-checked between January 5 and June 7, 2021.

● 105 posts (44% of the total sample) included narratives or identical claims spreading on more than one of the four platforms.

● 94% of the content we analysed (226 pieces) was posted in French, Spanish, Portuguese, German, Italian or English.

○ The 14 remaining pieces were posted in the following languages: PL, HUN, HR, SWE, NL, BG, RO, SR.

● The total number of pieces of content we identified starting from fact-checked articles was 156 for Facebook, 46 for Twitter, 27 for Instagram and 13 for YouTube.
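The per-platform percentages reported above can be reproduced from raw tallies of interactions. As a minimal sketch of the calculation (the function name and the interaction counts below are illustrative assumptions, not Avaaz's underlying dataset; the counts are chosen only to mirror the reported 68/12/11/9 split):

```python
# Hedged sketch: computing each platform's share of total interactions
# on fact-checked Covid disinformation ("emission" shares).
# The counts used here are placeholders, not Avaaz's real data.

def emission_shares(interactions: dict) -> dict:
    """Return each platform's share of total interactions, as a rounded percentage."""
    total = sum(interactions.values())
    return {platform: round(100 * count / total)
            for platform, count in interactions.items()}

# Illustrative per-platform interaction tallies.
sample = {"Facebook": 680, "Instagram": 120, "Twitter": 110, "YouTube": 90}
print(emission_shares(sample))  # Facebook's share comes out at 68%
```

The same pattern applies to the other ratios in the brief (e.g., the share of content left unactioned per platform): tally the counts per category, divide by the relevant total, and report percentages.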

ENDS