Home Affairs Committee Oral evidence: Online harms, HC 342

Wednesday 20 January 2021

Ordered by the House of Commons to be published on 20 January 2021.

Members present: Yvette Cooper (Chair); Ms Diane Abbott; Dehenna Davison; Ruth Edwards; Laura Farris; Simon Fell; Tim Loughton; Stuart C McDonald.

Digital, Culture, Media and Sport Committee member present: Julian Knight.

Questions 1–168

Witnesses

I: Theo Bertram, Director of Government Relations and Public Policy for Europe, TikTok; Nick Pickles, Global Head of Public Policy Strategy and Development, Twitter; Derek Slater, Global Director of Information Policy, Google; and Henry Turnbull, Head of Public Policy UK & Nordics, Snap Inc.

II: Monika Bickert, Vice President, Global Policy Management, Facebook; and Niamh Sweeney, Director of Public Policy EMEA, WhatsApp.

Examination of Witnesses

Witnesses: Theo Bertram, Nick Pickles, Derek Slater and Henry Turnbull.

Q1 Chair: Welcome to this Home Affairs Select Committee evidence session on online harms. We have with us today Theo Bertram from TikTok, Nick Pickles from Twitter, Derek Slater from Google, covering YouTube, and Henry Turnbull from Snap Inc, covering Snapchat. Thank you very much to our witnesses for joining us today. We are very grateful for your time. The inauguration of President Biden and Vice President Harris is now under way, and there are 25,000 National Guard troops guarding the Capitol and the ceremony. We are very clear that there could be more violence today, and we all have in our minds those terrible scenes of a violent mob storming the Capitol and assaulting an open democracy. Four years ago, when this Committee first took evidence on online harms, we raised the issues around people escalating hatred or organising violence on social media, but we would not have imagined that we would ever see the scenes we saw this month. Given some of the anger and extremism that we have seen, how far do you feel that each of your platforms has made this possible? Derek Slater: Thank you, Chair, both for the question and for the opportunity to come before you today to continue this conversation about online harms.

Certainly what happened this month was a terrible event, and we take very seriously our responsibility both with respect to that situation and, more broadly, with respect to providing high-quality, relevant information and addressing user safety, online harms and low-quality information. Over the years we have deployed multiple different levers to address that challenge: raising up authoritative, quality information; rewarding creators who create that sort of information; removing illegal content or content that violates our content policies; and reducing or not recommending content that is borderline.

Q2 Chair: Sure, I know that you have policies. I am just asking you to reflect. How far do you think that your platform has enabled the kind of extremism that we have seen? Not what are your processes but, realistically, how far do you think your platforms have enabled this? Derek Slater: I approach that with pride and humility in the sense that I think we have continued to improve over time, but there is always more to do. There are always new challenges, new bad actors, new threat vectors. I look at the improvements we made a couple of years ago on our hate speech policy and the way that dealt with—

Q3 Chair: Sure, and those are the things that you have done. I am asking you to have a bit of humility and tell us how much you think some of the groups that organised the assault on the Capitol were communicating, were using YouTube to radicalise each other, to publish their videos and so on. How far do you think that was happening? Derek Slater: We took action against groups that were violating our policies. Certainly, we are looking closely at what we need to do in the future to make sure we continue to pursue it. To give one example, late last year we improved our policies around harmful conspiracy theories connected to real-world harm and aimed at individuals or key groups, such as QAnon—

Q4 Chair: Sure, but you are still telling me the measures that you are taking. I am interested, first, in how bad the problem has been. How much of this has been on your channels? Derek Slater: We have continued to make progress in removing and reducing this sort of harmful behaviour—

Q5 Chair: You are still not answering my question. I am not asking you about the measures that you have taken. I am asking you about how much of this has been happening on your channel. Derek Slater: Chair, I am trying to answer the question. I understand. When we saw content that was violative, we did take action. I think we always have to continue to evaluate, and we are still taking stock of what happened and what we can do better, so I am happy to follow up and to continue as we evaluate the situation at hand.

Q6 Chair: You basically think you have done everything you could and you have taken it all down? Derek Slater: No. I think we have to continue to improve over time. We get up every day thinking about, “How can we do better?” We are not resting on our laurels in any way. We will continue to evaluate how we did, how we are doing and what we need to do going forward.

Q7 Chair: Have you removed white supremacist material from YouTube? Derek Slater: Yes, we remove material that expresses superiority based on race, religion or other characteristics of that sort. We do remove it from YouTube.

Q8 Chair: How come I could find it just 10 minutes ago? How come I can find videos from Red Ice TV? I think you have banned the channel, but I could still find their material and their videos promoting white supremacist theories. They were being posted by other people just 10 minutes ago. Derek Slater: I am not familiar with the exact example you are talking about, but we use a combination of automated mechanisms to identify violative content. Those systems are getting better and better over time at identifying that content and ensuring it is removed before it is viewed at all, or certainly before it is widely viewed. I think last quarter 80% of the violative videos were removed before 10 views.

We also rely on and are grateful for co-operation from users and trusted flaggers like the CTIRU, and for references from folks like yourself, to make sure that we are reactively responding where automated systems are not able to detect those things. I would be happy if you sent along that information. We will be able to take a further look at it.

Q9 Chair: Do you think that none of the people who were storming the Capitol would have been radicalised on YouTube? Derek Slater: You know what, I think we have to look very carefully at this situation and make sure that we are continuing to address that challenge. The challenge of making sure we are not only removing violative content, but reducing and not recommending content that may not cross that line but borders it, is something that we need to continue to invest heavily in.

Q10 Chair: Let me turn to Twitter. Mr Pickles, welcome. Can you reflect for me how much you think this kind of problem has been enabled by Twitter or the fact of Twitter’s existence over the last few years? Nick Pickles: Thank you, Chair, for the opportunity to appear and discuss this issue. Certainly, my colleagues and I were shocked watching the events unfold in the Capitol, and I think it is impossible for anyone who works in the technology sector to look at those events and not ask, “Did we play a part in this?” I think the answer has to be yes. The link between offline harm and online content is demonstrably real. Four years ago, you may have heard a different answer to that question.

Looking at it, did we move fast enough on some of the conspiracy theories? One of the challenges is that there has not been an obvious tipping point where you would say, “This was the point where we should have taken action.” Last year we looked at QAnon, for example, and de-amplified it, made it harder to find and did not recommend it, but we allowed the speech to continue. This year we changed our approach and aggressively removed 70,000 accounts related to that. Now, if we were to reflect on our actions, should we have taken more aggressive enforcement action earlier? I think we have to say yes.

The challenge right now is to look at our services and say, are the policies that we have now the ones we had in 2016? They are not. We have strengthened them significantly. Have we enforced them rigorously and consistently enough? Again, I think we have to say we have more work to do to enforce our policies consistently. Ultimately, are we willing to take the hard decisions when needed? Looking at our actions around the Capitol riots, ultimately suspending the personal account of the President of the United States was an unprecedented moment, but it was also a reflection of our service’s role in offline events. We saw how those tweets were being interpreted, and we took the decision to remove that account. We definitely have more to do and more to learn, but I also think we have made progress since 2016.

The way that the media ecosystem as a whole works, people may not be able to tweet but they can go to other platforms. They can go to TV stations, some of which are still repeating the same lies we saw about the election and its integrity several weeks ago that we at Twitter took action on. Looking at that, at what Yochai Benkler called “networked propaganda,” the whole media ecosystem, this has been a deeply troubling and shocking time for everyone who works at Twitter. I am not going to tell you that we have got everything right, because the honest answer is that we have not.

Q11 Chair: If you were looking back, say, two or three years ago, with hindsight or in retrospect, what ideally do you think all platforms, including yourselves, should have been doing two to three years ago? Nick Pickles: Certainly, if you look at the policies around conspiracy theories, the undermining of trust in institutions and civic processes is something where the policy we had in 2016 and 2018 was strengthened; we strengthened it again in 2020 for the election. But the harm that is done by people not being challenged when they attack the integrity of civic processes is something that has a deeply corrosive effect on civic discourse and can lead to the kind of violence that we saw in the Capitol. We should have moved earlier to be more aggressive in providing context through labels, but also in the outright removal of people.

You may have seen we launched the strike policy. Previously, we allowed people to share information that broke that policy. We provided a label. Now we have published a strike policy that says, “If you do five of those tweets, that is a permanent suspension.” Recognising that harm in the conversation around civic events is absolutely a lesson we should learn across every platform.

Q12 Chair: Do you have a policy of keeping white supremacist and neo-Nazi accounts off Twitter? Nick Pickles: We have a policy specifically addressing violent extremist groups, which looks at ideology and those groups being engaged in violence. We launched that policy in 2017. It is a good example, again, of a policy where the nuance around speech and violence is something we better understand now. We have taken action on more than 200 groups globally under that policy, more than 100 of which are white supremacist, neo-Nazi organisations.

Q13 Chair: Would you ban white supremacist groups where the tweets themselves do not advocate violence but are clearly promoting hatred and racism? Nick Pickles: For the purposes of the violent extremist group policy, we look at the group identity, so we do not look at individual tweets. We are looking at, for an organisation like National Action, are you supporting them? Are you affiliating with them? We then have a separate set of policies that look at questions of hateful conduct and glorifying violence. Again, we have recently expanded our policy around dehumanisation, which was an area of content that we did not previously capture in our rules, and we are also expanding our policies on incitement. Particularly incitement to harassment was something that we felt was not strong enough, and we launched that policy yesterday.

Q14 Chair: Do you include incitement to hatred, or do you just include incitement to violence, when you are removing content? Nick Pickles: Incitement to harassment would capture that as well, and—I think this is a good example of how, as we change our rules, people change their behaviour—we have a longstanding policy on incitement to violence, but people would use coded language. They were trying to skirt that rule, and so, as you say, they would incite discrimination. This was a policy we launched yesterday, so I am certainly not saying we have got to a point where the content has all been reviewed and actioned, but incitement is definitely an area where we felt we needed to do more work to refine our policies.

Q15 Chair: Let me turn to TikTok, to Theo Bertram. What is your reflection on the level of incitement in the content on your platform? Theo Bertram: Thank you very much for having us. TikTok was not around when President Trump came to office, and I do not think we have been as prevalent in the recent events as the other platforms, but undoubtedly this is a challenge for all platforms, including us. We tackle these issues in three broad ways, and I am happy to dive into any of them.

Q16 Chair: Before you get into the actions you take, what I am interested in is your reflection on how much of a problem TikTok has and how much your platform might be facilitating the kinds of events that we saw at the Capitol. Theo Bertram: I can give you the numbers and I can give you the detail of the policies, but broadly I do not think we see it as a big problem yet. I think we anticipate it as a problem that will come.

Q17 Chair: When the Anti-Defamation League Centre on Extremism talked in August 2020 about “the sheer volume of readily accessible white supremacist, anti-Semitic and otherwise hateful video content on TikTok”, were they just wrong? Theo Bertram: We have worked with the ADL, and that was August last year, and we have developed our policies further since then. Obviously, we are not 100% and we need to do better. I fully accept that. You asked me to make a broad judgment. On the broad judgment, I think we are doing a good job. We can always do better.

Q18 Chair: In your assessment you do not think you have a huge problem at the moment? Theo Bertram: That is not what I said, no. You asked me relative to the others, relative to the situation in the US. I can give you some precise numbers, but this issue is one that we all have to tackle. We do it by design, with policies and with enforcement. It is a challenge for all of us.

Q19 Chair: In June, the International Institute for Counter-Terrorism highlighted anti-Semitic content on TikTok, including videos of Nazi rallies. Other organisations are identifying content on your platform that you are not identifying. Theo Bertram: We have proactively removed about 80% of hateful content. That overall proactive number is below the proactive removal rate for other content, and it is something we are working to improve. We work with a number of third-party organisations to help make sure we can design our policies and processes so that we capture these organisations and remove them.

Last year we changed our policy around hateful racial attacks so that we broadened—you have protected groups and you have attacks on protected groups. One way we saw that this was happening on our platform was that the far-right extremist groups were deliberately sailing close to the wind and finding codes and ways to get around our policies, so we have broadened the policy that we have, which is something we worked on with the organisations that you have mentioned and with others. Now, for example, we would not just remove “on the face of it” anti-Semitic attacks. We would remove proxies for that, such as conspiracy theories around George Soros. We are continually evolving our policies. We have strong enforcement in place, but I accept we can do better.

Q20 Chair: Henry Turnbull. Henry Turnbull: Thank you, Chair, and thanks for the opportunity to be here today.

Reflecting on the events this month at the Capitol and the radicalising speech that led to those scenes, I think it is well recognised that we have done a good job in keeping that kind of content away from public areas of Snapchat. As background for the Committee, Snapchat is a very different platform from traditional social media. It has never been an open, town square style platform with a focus on open debate. Rather, the core functionality of the app is private messaging, so the public side of Snapchat, which is our Discover platform for news and entertainment, and our Spotlight tab for the community’s best snaps, is a curated and moderated environment, which means that we have always chosen what content is surfaced and promoted there. The most important factor for us in determining what content we promote there is whether content complies with our community guidelines, which are publicly available online and apply to all content on Snapchat.

Q21 Chair: Once again, I am not currently asking about the policies so much. I am just asking for your reflection on how far you think your platform has contributed to some of the problems we have seen.

Henry Turnbull: Reflecting on the events at the Capitol and the open speech and content on some social media platforms that contributes to that kind of real-world violence, I think we have done a good job at preventing that kind of content from being surfaced on Snapchat. This is not to say that there is no illegal or harmful activity in the private areas of Snapchat, but, in the public areas, in the content that is surfaced publicly to users, I think we have been able to stop that content from appearing.

Chair: We will come back to some of those issues. Thank you. Q22 Ms Diane Abbott: This question is probably for Nick Pickles. The Committee is, as are the public as a whole, interested in the action taken against President Trump and the permanent suspension of his account. Some commentators have tried to argue that this represents political censorship, but it seems to me that the issue here was incitement to violence. Nick Pickles: This is obviously an issue that has garnered much international debate. To highlight the complexity of these issues, we published a blog post when we took this position, which specifically outlined the reasons for taking this action. Our policy on incitement to violence was the basis of this action, in particular looking at the way that the tweets from the personal account of the President were being interpreted by groups who were planning further violence. I note that the political accounts of the institution were still on Twitter, so @POTUS and @WhiteHouse were still there, but the personal account of President Trump was actioned, and we published a blog post to explain those actions to the world.

Q23 Ms Diane Abbott: Thank you. That was not meant to be a trick question. I think the issue with President Trump’s account was incitement to violence. I noticed, though, after you permanently suspended his account, that your CEO Jack Dorsey said, “A ban”—which clearly he was sad to have to do—“is a failure of ours ultimately to promote healthy conversation.” In your opinion, what more could Twitter have done to prevent the situation escalating to this level? Nick Pickles: I think this speaks to the balance of interventions that we have. Two years ago we introduced a range of labels. Previously, when I joined the company six years ago, we had two choices: to remove something or to leave it up. We need to move beyond that binary choice of content moderation to the question of having different product interventions, using things like time-outs more, using more labels and also having clearer, more detailed communications with our users generally about what Twitter’s policies involve. There is a whole range of things that we can do, and it speaks to the broader question of how people use social media. Once they are permanently suspended, how do we think about the long term and how that impacts the public debate, which I think Jack spoke about in that tweet thread. We are now increasingly investing in the combination of product interventions, more labelling and more time-outs.

Q24 Ms Diane Abbott: I think there is a difference between something that is quite clearly an incitement to violence, as was the case with President Trump’s tweets and other key actors, and Twitter being almost a cesspool of ugly and unpleasant conversation. The last time somebody surveyed this, I, as a Member of Parliament, got more racist and sexist abuse than all the other women MPs put together. The problem with the general atmosphere on Twitter is not that it has driven me out of politics—it clearly has not—but it is very off-putting for young women thinking of going into the public space if Twitter does not appear to be doing anything, not to take down people who are clearly on the wrong side of the law, but to try to promote a healthier conversation. Nick Pickles: This is where the question of consistency of enforcement of rules comes in. Clearly, we have more work to do to make sure that the abuse that you and others receive is responded to equally and as vigorously: making sure that we are enabling our users to control their experiences, launching things, for example, that mean you can control who replies to tweets, as well as more technology to find those abusive and harassing tweets.

At the same time, you are absolutely right, the platform-wide norm that we want to seek is to raise the bar of healthy conversation across Twitter, and that is why we have spoken about this issue so much. One of the challenges—certainly when you have a political climate and a media ecosystem that is incredibly hyper-partisan, incredibly divisive—is balancing the impact that those figures have on Twitter with the impact it has on everyday discourse. That tension is increasingly difficult to distinguish, but it is also something that we recognise as a problem we have to tackle.

Q25 Ruth Edwards: I would like to follow up on Diane Abbott’s questions, please, Mr Pickles. Like other social media companies, Twitter has for a long time maintained that it is a platform, not a publisher. Are you at all concerned, given the controversy of your suspending President Trump’s account, that this action has undermined the way you categorise yourselves? Nick Pickles: This is an issue that we have been discussing for several years. I saw the Culture Secretary had a piece saying that we need to get beyond this publisher-platform distinction because everyone recognises that there is a different category. Certainly, the idea of taking a regulatory system that was created for newspapers and trying to fit the entire internet into that regulatory system is not going to give good public policy outcomes.

The question for us is that we should publish our rules, and we should enforce our rules consistently and impartially around the world, for all of our users. Then we should be transparent about how we made those decisions and give people a good process for appeal. That is how we do it as a service, but this question of platforms and publishers is separate from the question of companies enforcing their rules.

You heard previous witnesses say that different services are different services, and we want to protect that competition and that diversity. Some of the troubling conversation around the publisher framework is that you lose that diversity and competition, and ultimately you would have less speech on the internet. I think it is time to move beyond the publisher question and look at how companies enforce their rules, and then rebuild trust in those processes.

Q26 Ruth Edwards: That is very interesting, and I certainly agree with you that consistency and transparency are what is needed to maintain trust in your platforms. How do you ensure that your rules are enforced consistently? Nick Pickles: We have a number of processes. The first is publishing our rules. We try to have a combination of every rule being tweet length, so they are very short, but we have published detailed guidance on what does break this rule, what does not break this rule and how you report it. Having content moderation guidelines be public is the first piece.

The second piece is, when someone breaks a rule, telling them which rule they broke; again, that is communication we did not have when I joined the company, but we do now. We give people a right of appeal, and then publish statistics about what we have been doing under our rules. For the past three years, we have expanded every year the types of data we make available. We then have to do internal training and quality assessment. One of the big challenges for our industry going forward, and certainly mindful of conversations around online harms legislation, is how we open up for public assessment more of the work that happens behind the scenes to ensure quality and rigour. That is why transparency across all our operations is critical to this question.

Q27 Ruth Edwards: One of the reasons that people think things are not enforced consistently is the different treatment they see given out to different accounts. On the one hand, you have President Trump’s account suspended for inciting violence. Fine. On the other hand, you continue to give a platform to embassies of the Chinese Government, based all over the world, to defend and justify the violence and the genocide they are carrying out against their own people. How is that consistent? Nick Pickles: Last year we recognised this was an issue, and one of the challenges was that we saw people engaging with accounts perhaps without the context. As well as the official accounts, you have personal accounts of diplomats and of journalists who work for state-affiliated media organisations. Last year we rolled out across the P5—UK, US, Russia, France and China—labels specifically on Government accounts and on media accounts. If you are engaging with one of those accounts, you know explicitly, “This is from a Chinese account”, “This is from Russian state media.” If you click through to that label, we give you the context that Twitter is blocked in that country by the Chinese Government. That context allows an informed debate.

The broader question of removing accounts to protest against censorship of Twitter in China does not further the public conversation, so that balance is something we are striking right now. The fact that this conversation is happening in public on Twitter gives us a greater global public conversation to hold Governments like the Chinese Government to account.

Q28 Ruth Edwards: How is that consistent with the action you have just taken against President Trump? At least the American public can take to Twitter to talk about how appalled they are with their President. The Chinese people cannot take to Twitter to voice their own views at all because, as you say, it is banned in China. It is the issue of consistency that is so troubling. I am looking at a tweet on my other screen that was retweeted by the Chinese embassy in the UK three days ago, and they have retweeted a tweet that says, “Forced labour is the biggest lie of the century, aimed to restrict and suppress the relevant Chinese authorities and companies and contain China’s development.” You are right, it does have your Chinese Government label on there—it is pretty obvious where it is coming from—but I cannot understand how that is consistent with the action that you have taken against the Twitter account of the former President of the United States. Nick Pickles: This is where we have to do more work to explain in detail how our rules are enforced. We published a blog post looking at the issue of, for example, world leaders, recognising the geopolitical conversation. Again it is a question of, “How do we best respond to statements like the one you just read?” Is removing it going to inform anyone? Is removing that content going to facilitate further public scrutiny of that Government action? This is something that Jack Dorsey, our CEO, spoke about in the thread that Ms Abbott referenced, that fragmenting the public conversation and taking content away from it is also a negative that we have to weigh in our actions. The kind of conversation we are having now and the kind of debate you had in Parliament yesterday about the actions of the Chinese Government, that transparency and that public conversation is vital, and it is not served by removing content in the way you just mentioned, unless it clearly violates our rules. My understanding is that, currently, the tweet you read does not violate our rules.

Q29 Ruth Edwards: I can understand your argument for keeping it up. What I find confusing is how that is possibly consistent with the action that you have just been taking against President Trump’s account. It seems to be very different. Okay, this particular tweet may not violate your rules, but we all know what it is referring to. I find it very strange that you are adopting such a different approach.

Nick Pickles: This is the complexity and the challenge of these issues. It is why we published a very detailed blog post citing the specific tweets from President Trump that we saw inciting violence in real time. We saw messages from people on other platforms talking about taking violent action in the United States, and that is why we look at it from an incitement-of-violence piece. But you are right, there is a whole suite of policies and we continue to keep them under review to ensure that, when questions like this arise, we are taking the right action. But as I say, the best response to those kinds of tweets is public scrutiny and public debate and holding Governments to account. Content moderation is not a good way to hold Governments to account compared with the actions of the international community that can scrutinise that content.

Q30 Ruth Edwards: Is the issue one of timing? If it is happening in real time and it is inciting violence as it happens, it gets taken down, but if it is justifying violence, enforced labour or something in hindsight—although really it is still carrying on—that is okay, is it? Nick Pickles: This is why we have a range of different rules. I think the tweet you speak about is clearly very political. The reason it is being discussed today is because it is very political in nature. On the speech from the President and the tweets that we took action on, we outlined why the context of those tweets was leading to discussion of violence being committed in other places again in the future, and those tweets incited further violence. That is a difficult balancing act and, as I say, we continue to keep our rules under review in this area to make sure that we are striking that balance. But it is certainly not something that we would say is easy or that we have finished working on.

Chair: While we are on Twitter content issues, we will go to Laura Farris. Q31 Laura Farris: Nick, I want to ask about action on harmful content. You may be aware of the piece that was written in The New York Times by Nicholas Kristof earlier this month about Pornhub. It was a pretty powerful exposé that showed it had content containing child sexual abuse and human trafficking, and there were testimonies of under-age women who had reported to Pornhub that there were videos of them being raped. It had not been removed, or it had been removed but only after lawyers became involved. As a result, Pornhub has radically changed its offer, and I think it has removed a big chunk of its content. Also, I think Visa and Mastercard withdrew their services. It was a pretty transformative moment for that company. Twitter publishes a lot of Pornhub content. I want to ask you about the limits of your own sense of responsibility in that regard. Do you consider yourself to have no responsibility because it is an external company and can do what it likes, or do you have any verification procedures that make sure that content that you publish does not inadvertently contain some of the illegal material that Pornhub has just taken down? Nick Pickles: Thank you for raising this. I am familiar with the general story that was written. For several years Twitter has had policies specifically covering both child sexual exploitation and what we consider non-consensual nudity. That policy goes beyond content that would be considered graphic pornography and includes things like intimate imagery and what we have seen around the world in some countries, upskirting for example. When we rewrote our policy several years ago, we explicitly included all of that. We take action on tens of thousands of accounts every year, specifically for breaking our child sexual exploitation rules and our rules on non-consensual nudity. We also have a range of partnerships, so we are able to use technology to identify this content.

Q32 Laura Farris: Pausing there, just so I can have this on the record, if you were featuring the work of another company, say a pornographic company, would you faithfully reproduce what they tweeted, or would you have some sort of moderator who would assess it and say, “I am not sure about this”? Would you take it on trust that Pornhub was a reliable corporate entity and, therefore, could tweet in the way that any other company might tweet, or do you have a separate person who is designated to look at that content and make judgments on their own? Nick Pickles: I would probably distinguish between links. If someone posts a link to a third-party website, we have a set of policies in place to specifically govern that. If we were made aware of a link to a piece of content on Pornhub that was, for example, child sexual exploitation, we would stop people sharing that link, irrespective of what Pornhub did. We have a set of policies on the link itself. If someone was to post content to Twitter, we have a limit where you can post only two minutes and 20 seconds of video to the platform, so they tend not to be the full pieces of content. We then have a separate team and a separate set of policies specifically looking at that, as well as using a lot of technology. We look at that saying, “Does this match a previous piece of content that has been identified as child sexual exploitation?” That is a partnership through NGOs like the IWF in the UK and the National Centre for Missing and Exploited Children in the US.

If we identify an account that has shared child sexual exploitation, we also inform law enforcement so that law enforcement could follow up on that individual. Twitter has had these partnerships and these policies in place for several years. I cannot speak to what Pornhub was doing, but if someone shares illegal child sexual exploitation material on Twitter, we are going to report that to law enforcement, who will take appropriate action. One of the questions that has arisen is whether that has been happening in other services.

Q33 Laura Farris: I have two follow-up questions on child sexual exploitation specifically. Are you confident that nothing you broadcast in relation to Pornhub would stray into child sexual exploitation, or are you completely dependent on somebody else alerting you to the fact that it might? Nick Pickles: Again, if it is a link to Pornhub, that is a different process from someone posting video on Twitter. We use a range of technology, things like PhotoDNA, which are well understood. If that content matched previously shared and identified child sexual exploitation material, we look to identify that proactively, not waiting for user reports. Equally, we receive reports from users and partners like the IWF, where we take that action irrespective of what Pornhub did. The answer is that we proactively look for this.

Q34 Laura Farris: Some of the staff on this Committee found some material that they looked at—in fact there are some images that have been posted in our briefing note—from November and December of last year. It is somebody called Matthew Estes, who had a username that did not reveal his name, but he is a convicted child sex offender in the UK. He posted tantalising pictures, not explicit, of himself with a child, and had a sequence of comments underneath those photos from people who were either familiar with his video content or wanted to find out more. Then people in a slightly subtle way, but pretty obvious, directed one another to places where you could potentially find that. That was running for maybe four to six weeks on Twitter. When the Committee members here tried to report it, they could not find anything through your dropdown menu that indicated that what they were looking at was child sexual exploitation. Can you assist the Committee with why you do not have a report function for child sexual exploitation? Nick Pickles: Thank you for raising this. In 2021 we are undertaking a review of our entire reporting flow, specifically to investigate questions like this. We have a webform through our help centre to report this issue. You are right, we do not have it in-app. One of the challenges that we have seen, and I know other partners and platforms have seen this challenge as well, is that if you include an option for something like child sexual exploitation it is used by people trying to get content removed, so you end up with a significantly high number of false and not accurate reports, which can divert resources away from finding content. The Internet Watch Foundation, for example, has a public reporting channel, and I think only about 11% of the reports they receive are accurate reports. There is a trade-off between allowing easy reporting and creating potentially a lot of inaccurate reports, versus focusing our energy on partnerships and technology. It is a trade-off that we have made for several years, but it is something we are looking at this year to see if we should expand the range of reporting options in the reporting flow.

Q35 Laura Farris: I have one final question on this. I think this person was taken down, or his account was suspended—that is perhaps the right term. When you look at the people who were linked up and directing one another, I think it was found that their accounts remained live. Is it part of Twitter’s function to police the ecosystem that might exist around a provocative post, particularly if it was delving into that kind of content? Do you look at those people, and would it be okay for them to comment? Nick Pickles: No. Our policy covers the commentary around this content as well as the content itself. We are grateful to the Committee for bringing this to our attention. It was definitely something that our technology had not picked up, because the way that people were communicating, the use of coded language, trying to discuss getting off Twitter, was something that our technology did not pick up. The content being shared was more in the provocative space than images that we had previously removed for child sexual exploitation. But when this was brought to our attention, we investigated it and we have reported the appropriate offence to law enforcement as well.

Q36 Laura Farris: Does that include the people who commented? Nick Pickles: I am not sure if it was specifically every individual account, but several accounts were reported to law enforcement. I can follow up on that afterwards, if that is helpful.

Q37 Chair: To follow up that answer, Mr Pickles, are you saying that you do not allow a specific channel for reporting child sexual exploitation or abuse because lots of people who want to complain about other things will then complain in that way? Surely all those people, if they are that keen to get another piece of material taken down, will be putting in a complaint somewhere else anyway. You will still be getting those complaints. I do not understand how allowing people to flag something as child sexual exploitation means that you get more complaints than you would have done otherwise. Nick Pickles: This is exactly the point. First, we do allow reports through the dedicated webform; you can go to our website and report it. It is not in-app right now. This is why we are doing a review. One of the things that we have become significantly better at in recent years is filtering and prioritising user reports. That is why we are now looking again at this issue, because it might be something where our technology is good enough to try to help us prioritise those signals. This is very specialist content, so it does not go to a general policy team; it goes to a specialist team. That is why we have to try to prioritise.

Q38 Chair: But you do need to make it easy for people to report this, and it was not for our Committee. We have Committee staff who were looking at this and who were determined to report it once they saw it. Had this been somebody else scrolling through, just a bit uneasy about it, had not investigated, they might not have ever reported it. Nick Pickles: That is exactly why we are doing the review this year on expanding our reporting process, including having this issue in the app.

Q39 Simon Fell: I am afraid, Mr Pickles, I am going to be sticking with you for a couple of questions. Going back to Diane Abbott’s questions, we have talked about President Trump, but we also have British citizens who have been suspended and banned from your platform for sharing offensive content. I am thinking notably about Stephen Yaxley-Lennon. I am interested in the issue of profile versus presence on your platform. He may have been banned, he may not have a profile, yet there is content on there that is produced by him, starring him, shared by either puppet accounts, from the ones I have looked at, or his supporters and followers. It is just as harmful as him being there in person. I am curious about your view as to that line between profile and presence, how you think it should be most effectively acted on, and whether you consider you are effectively acting on it at present. Nick Pickles: This is one of the biggest challenges we have. Certainly, for example, looking at recent events around the world, people will post content to Twitter to condemn it, to draw attention to it and to say, “Did you know someone has just said this in another place?” One of the challenges is that then reproduces the very content from an individual who we have previously banned from Twitter. We try to draw the line currently around: is an account being created to evade our ban? If we suspend an individual and you then create an account devoted only to sharing content from that individual and you call it a fan account, that is something we would look at and say, “Is this content being shared potentially by people connected to the original individual to evade our suspension?”

We draw a distinction between, “Are you evading a ban?” versus, “Are you sharing this content because it is off-platform commentary?” The content itself may not break our rules. That balance is the one we are constantly trying to strike, but particularly when people are sharing content to condemn and to raise awareness, we take that context into account. If you share it to praise, to glorify, and the content being shared breaks our rules, or potentially includes a threat of violence or other violations of our rules, we remove the content, whoever shared it. But this question of sharing content, including from news broadcasters and news organisations around the world, is one that is a very hard line, and it is very context-specific.

Q40 Simon Fell: Where I am most interested is people may film an event, they may share it, and that is part of the news flow, and it is part of what your platforms are there for, whether we like it or not. But where there is content that is produced specifically for your platform to evade, someone who does not have a profile and gets around that block you have put in place, do you feel you should be more proactive there, or where do you draw that line? Nick Pickles: I think we both agree that that line is very hard to define. As I say, the context that the content is shared in is as important as the content itself. We certainly do not want to stop people highlighting what is happening off Twitter, but we have also seen, for example, individuals that we have taken off Twitter unwittingly being given back a voice on Twitter by people for a variety of reasons. This question of allowing people to reshare content that perhaps glorifies or praises, to the Chair’s previous question—if it was a violent, extremist group under our policy and you shared that content to glorify it—we would remove that content. The challenge is individuals who may have particular views that are shared off-platform, and then explain the consistency of sharing it to say, “This is what someone has said.” We continue to look at and refine that consistency and enforcement challenge. I agree with you, it is a very difficult line to draw, and we want to make sure that people can hold to account, call out and expose action happening off Twitter. We continue to look at how that is evolving.

Q41 Simon Fell: I would like to move briefly on to online harms, so I will give you a break for a second, Mr Pickles. We have the Government’s online harms proposals out now. It seems likely you will be classed as category 1, so having to take action on legal but harmful content. What does this mean in practice to you, and what are your general thoughts on those proposals, bearing in mind our time constraints? Theo Bertram: We welcome the proposals. I think it is in line with the trend we see globally towards regulation of content moderation, in the way that we have historically seen with privacy. Although you say we have seen the proposals, I think there is still some detail to come on that, but we certainly welcome this first step.

Derek Slater: Similarly, we cautiously welcome what has been put forward. We are still digesting it. We have not waited for regulation in this area. There is a lot of detail still to be sorted out: what defines “legal but harmful”, what measures need to be taken, and what is a risk assessment? Getting those details right is that much more important because of the very significant penalties that are proposed there and the knock-on effects, the unintended consequences that may have for lawful speech and for investment and innovation. We have to take a very close look at it, but we are grateful for the thought that has been put into it, and we look forward to continuing the conversation going forward.

Henry Turnbull: We absolutely support the key planks of policy that are outlined in the full Government response. On the idea of principles-based regulation, based on a statutory duty of care, we think it is really important that regulation is independent from the immediate political goals of the Government of the day. We very much support the proposed role for Ofcom as a trusted, credible regulator. As the others have said, I think there is plenty more detail to come, particularly in the Bill, and we will need to look at the precise provisions in the Bill to determine which company is in which category and that kind of thing. The key difference between category 1 and category 2 seems to be an obligation to take action against this category of legal but harmful content, fake news, self-harm and so on. Ultimately, these are all things that are already prohibited on our platform and that we already take action against.

Nick Pickles: I echo the comments from my colleagues. I will make three suggestions where I think the legislation could perhaps be stronger. First, we are all on the user side. There is a supply-side question of incentives. One of the questions that is not in scope right now is: are there financial incentives for bad actors to create disinformation, to create harmful content that is being monetised through the online ecosystem? Not addressing the incentives question in online harms seems like a missed opportunity. Secondly, Twitter took the decision to ban all political advertising. The question of online political advertising regulation still seems to be up in the air.

Finally, transparency. Often some of the questions we have from Committees like the Home Affairs Committee are around the relationship we have with the CTIRU and the National Crime Agency. Expanding transparency of Government functions in this area, how many requests are sent, how many responses are received, how many platforms are being talked to for those illegal content requests would help inform the public policy debate more broadly for all of us.

Chair: Thank you. We are joined on the Committee today by Julian Knight, the Chair of the DCMS Committee. Q42 Julian Knight: Thank you, Chair. Following up on Mr Fell’s questions, Mr Slater, I am quite interested, quite struck, by what Nick Pickles has just said about not monetising disinformation. This is obviously a complaint that has been made about Google and YouTube in the past, that effectively bad actors have been able to make money out of disinformation. Do you think it would be wise for you to adopt a new policy where you kept money earned on your platforms in escrow prior to its distribution, so that in any case in which disinformation is found to have taken place you could withhold that money? Is that something you would look to explore, Mr Slater? Derek Slater: Thank you for the question, and I appreciate this concern. To be very clear, we have policies around use of our tools to monetise content, to run ads that deal with things like harmful misinformation of various sorts, including harmful misinformation around vaccinations. That is something we have worked closely on with the UK Government and we are 100% committed to taking action in this area. The proposal you raise is an interesting one, and it is something we need to continue to interrogate very carefully, mindful of the legitimate actors that use these services to run a small business or to support their news publication.

Q43 Julian Knight: With respect, Mr Slater, if you sell on eBay through PayPal, you have to wait to get your money. I do not think that is a particularly legitimate response. What is stopping you doing something on this? It would potentially stop this overnight. Derek Slater: It is an important consideration. We want to weigh and balance these concerns well, and we will continue to look at this very carefully.

Q44 Julian Knight: That is sort of like you will wait and see. How long will it take you to consider such a policy? When will we know whether you will adopt it or not? Derek Slater: We will continue to look at where we can put this in place, where the benefits will be there.

Q45 Julian Knight: Give me a timeframe, please, Mr Slater. How long? Derek Slater: We will be happy to follow up in writing to talk about that.

Q46 Julian Knight: No, a timeframe would be very helpful right now, rather than following up in writing. How long will it take? Will it be six months, will it be a year, will it be two years? How long do we have to wait? Derek Slater: We are constantly evaluating our policies, making sure they are fit for purpose.

Q47 Julian Knight: That is not an answer, but okay. You will write to the Committee to set out a timeframe. Do we have that commitment from you? Derek Slater: I am happy to follow up in writing.

Q48 Julian Knight: Setting out a timeframe? Derek Slater: Again, we continue to re-evaluate our policies.

Q49 Julian Knight: We could be here all day with that line of questioning. Thank you very much, Mr Slater. Again following Simon’s question, a very good question, I presume that most of you will be category 1. Are there any platforms that you think should fall into category 1 that perhaps have not had the same level of public scrutiny as your companies? Henry Turnbull: I really cannot comment on other platforms and the extent to which they should be categorised. I can talk about our platform and the measures that we put in place to address harmful content.

Q50 Julian Knight: I know you can talk about your own platform. Are you not aware of your own ecosystem, and do you not think it would be helpful for someone with your know-how to put forward potential platforms for category 1, to help out legislators in this respect? Henry Turnbull: At this stage the information in the full Government response is pretty limited as to what constitutes a category 1 or a category 2 platform. We all need to examine the Bill to determine the extent to which we are covered, let alone comment on other platforms.

Q51 Julian Knight: Mr Pickles, you are almost certainly going to be category 1. Lucky you. Who else should be category 1? Nick Pickles: I must confess that I am not familiar with the specific delineation between category 1 and category 2. To my previous answer, there are probably companies in what I would call the tech ecosystem, rather than social-media ecosystem, that may be advertising networks or content distributors. There is an interesting question around infrastructure layers and certainly, given some of the recent debates, around provision of services to apps and things. I think the question of how app stores operate in this space is something worth looking at. The big challenge is looking at this as a technology ecosystem rather than as a social-media ecosystem. Again, to my previous comments, that may not be happening because of the way the legislation has been crafted and scoped, that companies who provide infrastructure and companies who provide app stores are perhaps not in scope. I apologise that I am not familiar enough with the details.

Q52 Julian Knight: No, Mr Pickles, that was a rare foray into an actual answer, which is very welcome indeed. Mr Slater, same question, who else should be in category 1? I am certain you will be category 1. Who else should be there? Derek Slater: Broadly in line with some of the comments of the others who have spoken so far, there is more to be elaborated on. Who goes in what category? You are right that a myopic focus on only a small set of companies may leave out smaller players that are bad actors, that are trying to incite violence and so on. We need to keep that in mind, making sure that like is treated alike, but different is treated differently. That is something where, to the Government’s credit, there has been some real thought in the paper, and we need to continue on that.

Q53 Julian Knight: Mr Bertram, you have the graveyard shift. Who else should be category 1? Theo Bertram: I do not know whether we will be category 1, but I think we are a challenger and, ultimately, we would like to be category 1, so we will work on that basis. If we look ahead to when the Online Harms Bill is introduced and what happens after that, could we anticipate a problem where companies deliberately keep their numbers below the limit to avoid the higher level threshold? I suspect you could have some flexibility for Ofcom to be able to intervene and look at that. At the moment, in a way, there is some flexibility there. We should probably think about where the harm is, rather than just how big you are. These platforms are here today because they are responsible, and I suspect the real harm will come from those that do not come to Committees like these.

The other area where I would encourage more scrutiny is the exceptions that are being created, and by that I mean particularly around journalism. It is absolutely right that we have exceptions for journalism, but I also know that the pattern of behaviour among the far-right extremist groups is that they will take on the guise of whatever clothes, whether it is politics or journalism, to exploit any loophole. That is where I am most worried.

Q54 Julian Knight: Your question, effectively, is about whether something is a trusted news source, and how we delineate that as legislators. Is that right? Theo Bertram: It is really hard. The White Paper also talks in a slightly different section, the advertising section, about the need for exceptions around politics. If we look over the last 10 years, one thing we know is that those extremist groups will use both politics and journalism as a cover for hate.

Q55 Julian Knight: Yes, I notice what I would almost call phoney news organisations, effectively. They set up for a short time, they have a very plausible name, a little like a telemarketing organisation does, and they pump out not disinformation but potentially harmful content. That is a very interesting point, Mr Bertram. Mr Turnbull, your colleagues admitted to the DCMS Committee in evidence that your age verification tools had been failing, allowing children to sign up for the app, and you undertook with the Home Office to address this. Do you have any figures on how many children have been able to evade your age verification tools? What work have you done with the Home Office since then to put it right? Henry Turnbull: I should be clear up front that we do not want under-13s using Snapchat. If we become aware that anybody using Snapchat is under the age of 13, we will delete that user’s account. There are also measures we can take like blocking their device. On how many under-13s are using it, this is not a data collection exercise for us. If we find that somebody using Snapchat is under 13, we just immediately delete that user’s account.

On the work that is being led by the Home Office to consider the issue of age verification, we have contributed a couple of ideas, which I can discuss here if it is helpful, on both a short-term solution to addressing this—

Q56 Julian Knight: Ideas are all very good, and that is fine, but I want to get to the bottom of how much involvement you are having, considering you have been found to be failing in this area and you have committed to working with the Home Office to address this. Can you please give a sense to the Committee of precisely what you have done and what actions you are taking? Henry Turnbull: I push back slightly on the central statement that we are failing in this area. I think our processes—

Julian Knight: With respect, Mr Turnbull, that came from your own colleagues in evidence in front of my Committee, so it is from your own mouth. Henry Turnbull: I believe this was a session a couple of years ago. Our approach to age verification is pretty consistent across the range of online platforms that are here today. If we find that somebody using Snapchat is under 13, we will terminate their account. We will also look to take measures like blocking the device. We rely on self-declared—

Q57 Julian Knight: We get the fact that you do things now, and I have to say that you have not explained to me precisely what you have done since two years ago when you gave that evidence. Again, I reiterate my question to you, Mr Turnbull: what are you doing with the Home Office to address age verification issues on your app? Henry Turnbull: We have regular meetings with the Home Office on a range of issues, including the subject of child safety and age verification.

Q58 Julian Knight: Are these meetings just you and the Home Office, or are they with a coterie of companies?

Henry Turnbull: We attend a number of meetings, individual ones on the issue of child safety specifically and issues like age verification, as well as attending roundtables arranged by the Home Office.

Q59 Julian Knight: What have you done in the last 12 months to sort this out, specifically on your app? What have you done? Henry Turnbull: When you say “sort this out”, the issue of age verification is a very complex one. There are no easy answers. The Government had to pull back their own proposals for age verification on pornography sites in 2019, so the idea that there is a simple solution to this problem is wrong. As I say, we have been contributing some ideas to discussions on age verification that I can discuss here, but the Government have not got particularly far in their solutions for age verification because of all the challenges that exist. There isn’t a simple solution.

Q60 Julian Knight: Social media companies always talk about how they want to do self-regulation and terms and conditions, but now you are saying the Government are not leading the way on this. It is up to you, is it not? Henry Turnbull: We are absolutely not advocating for self-regulation. I think the Government’s online harms proposals are the right ones. We have been contributing to that discussion. I can talk about some of the things we think would help with the issue of age verification if it is helpful, Mr Knight.

Q61 Tim Loughton: I have been on this Committee for more years than I care to remember, and we have had the social media companies in front of us on multiple occasions. Typically, a different representative will come along each time. Members of the Committee will raise various insightful or outrageous postings. You all agree that they are outrageous and not in the spirit of your own company’s rules, agree to look into them and the next time you come back we find those posts have not been taken down and nothing has been done. Not wishing to break with tradition, Mr Pickles, back in 2017 when Twitter was confronted about some anti-political hate tweets—it was at a time when there had been a lot of problems around anti-Semitism and attacks on Labour MPs who tried to call it out—I raised the subject of the hashtag #KillaTory, which by any reading falls outside your guidance, and you promised to have a look at it. When I looked into #KillaTory again today, I got, “If everyone who tweeted about Thatcher killed one Tory each the country would be fixed in time for Corrie.” Another one is, “I really see nothing to celebrate Trafalgar Day so why don’t we replace it with Kill a Tory Day” or, even more recently, “Now breaking the law is all fine and dandy, let’s all go out and kill a Tory.” Mr Pickles, why is it still there on Twitter, this incitement of violence against, in this case, Tories? I am sure you are equal-opportunity abuse platform providers and I could find such abuse against other political parties. Why does that stuff still have any place on your platform?

Nick Pickles: Thank you for raising this.

Tim Loughton: Again. Nick Pickles: I think the question is whether this is new content or the same content. There is the challenge of people posting this content on an ongoing basis. But you are absolutely right, those tweets sound to me like they break our rules. We should review them. I do not know if they have been reported, but I will certainly look into it.

I do not know if it was me who appeared before the Committee, but since 2017 we have significantly increased the amount of content we find ourselves. It is now about half of all the content we take down. It is not 100%. This is an example of where I will go and speak to my colleagues and ask how it is that this is not being fed into the technology we are using to find the content. We will get it reviewed under our rules. I definitely do not want to give the Committee the impression that this is 100% solved as an issue, and I hope those tweets are not the same tweets that were available.

It also speaks to a much wider political culture problem that those tweets are deemed acceptable, which removing them is not going to solve, but I will make sure that after this Committee I speak to colleagues and get that reviewed urgently.

Q62 Tim Loughton: You can block a hashtag on Twitter, can you not? Nick Pickles: No, because the problem we have is that some people will tweet hashtags to condemn something. Some people will tweet hashtags and say, “Why is this trending?” We can stop a hashtag. We have policies around hashtags that trend. It does not sound to me like this hashtag has trended. It sounds like a few tweets.

Q63 Tim Loughton: I found it very easily, and I found the same tweets we found and reported to you in 2017. I found use of the hashtag again in an inciting, violent manner since, and the answer you have just given is exactly the same answer that your predecessor gave last time round, so why should we have any confidence that you are taking this sort of thing seriously? Even when the Home Affairs Committee raises it with you, nothing is done about it. Can I go back to the Trump issue? I hold absolutely no candle for President Trump, but I don’t comment on your decision to take down his account. If somebody posted, “#Israel is a malignant, cancerous tumour in the west Asian region that has to be removed and eradicated. It is possible and it will happen”, do you think that is encouraging or exhorting people to violence? Nick Pickles: I do not know the context of that tweet. Certainly, we do take—

Q64 Tim Loughton: It does not need a context, does it? I have just told you this tweet says that Israel is a malignant, cancerous tumour and has to be eradicated. In any context that sounds fairly hateful, doesn’t it? Nick Pickles: As I say, without knowing the account that posted that tweet—one of the things we see, and it is why we have a specific world leader policy, is geopolitical conversation from world leaders who are engaging in very direct geopolitical sabre rattling. I am happy to go and review that tweet. I do not know who posted it, but the—

Q65 Tim Loughton: Let me tell you that the person who posted it is the supreme leader of Iran, and you have done nothing to vet any of his tweets that appear to be exhorting violence against an entire nation that might be said to be on a par with what Donald Trump was doing, less or worse, I don’t know. But it is just double standards, isn’t it? That is why people are taking issue with it. It is double standards, clearly. Nick Pickles: We have a specific rule, and we have had it for several years, that covers geopolitical sabre rattling from world leaders. I had the same conversation, for example, when President Trump spoke about military action against Iran and North Korea. We did not take action on those tweets because we recognised it is important that the engagement of world leaders in speech relating to other states is publicly available and transparent in our rules currently. As recent events have shown, we need to keep these rules under review, but the actions of world leaders are not going to be moderated by companies like Twitter. This is the very essence of the free speech conversation—

Q66 Tim Loughton: Unless it is the former President of the United States. Let me come to you, Mr Slater, and Google and YouTube. Again, partly on the back of revelations in this Committee that your social platforms were hosting stuff that was certainly child sexual exploitation and extremism, you only agreed to review and change your policies because some of your advertisers, major corporates, saw that and decided to vote with their feet and took away advertising from you. Is that what it really takes for you to get serious about some of the hateful violence and sexual exploitation material that you routinely have on some of your posts? Derek Slater: No. Certainly we have standards about advertising on our platforms and so on, but we look holistically at this challenge and think about user safety, societal harm. That has driven a number of changes to policies over the years, such as the changes we made last year to our harassment policies to deal with implied threats and malicious slurs, the changes we made around harmful conspiracy theories. We are thinking about user safety and societal harm in the main.

Q67 Tim Loughton: Let us go back to child sexual exploitation, there is some pretty unpleasant, nasty stuff out there. Do you agree with what the Government are looking at on online harms, that you should have a mandatory responsibility to report it to the law enforcement agencies? Derek Slater: First, child sexual abuse material and exploitation is abhorrent, and we do not tolerate it. We have invested in world-leading technology to root it out, as well as responding to notifications about it.

Where appropriate, in light of the law, we refer it to law enforcement and specifically we refer it to the National Centre for Missing and Exploited Children, the US entity that then co-ordinates with international organisations of that sort.

That system in general continues to work well, but we would be happy to discuss ways it may improve over time. I think we have been constructive in that conversation, through the voluntary principles that your Government and the other Five Eyes Governments worked on, as well as the work of the Home Office on the draft code of practice on child sexual exploitation.

Q68 Tim Loughton: Could you tell us roughly how many suspected child sexual exploiters you have reported to the UK police? Derek Slater: We file reports with NCMEC. I do not have the specific figure for how many reports we made last year, but it is in the hundreds of thousands. We also contribute to industry efforts to support the industry at large in addressing these things, whether that is contributing hashes through NCMEC, our CSAI Match video-matching tool to the industry or our content safety API that has helped people in just the last six months to classify over a billion images.
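To make the hash-matching and image classification Mr Slater refers to more concrete, the following is a minimal sketch, assuming a pre-distributed list of known-image digests. It deliberately uses a plain exact-match digest rather than the perceptual matching that real tools such as CSAI Match rely on, and every name in it is illustrative rather than any vendor’s actual API.

```python
import hashlib

# Illustrative only: a production system would use a perceptual hash so that
# re-encoded or slightly altered copies still match; a plain SHA-256 digest
# only catches byte-identical files. The entries below are placeholders.
KNOWN_HASHES = {
    "3f8a0000000000000000000000000000",  # hypothetical digest from a clearing house
}

def digest(image_bytes: bytes) -> str:
    """Exact-match fingerprint of an uploaded image."""
    return hashlib.sha256(image_bytes).hexdigest()

def should_flag(image_bytes: bytes) -> bool:
    """Return True if the upload matches a known digest and should be
    withheld from publication, queued for human review and reported."""
    return digest(image_bytes) in KNOWN_HASHES
```

In practice the shared element is the hash list itself; the matching step is simple, which is why industry bodies focus on distributing and updating those lists rather than the comparison code.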

Q69 Tim Loughton: I understand what action you may be taking, whether or not we agree on how forceful it is in policing your own sites. But I am more interested in where there are very clearly people who fall under criminal activity by definition in the US or the UK, that you feel you should be proactively reporting those people to the UK police, the National Crime Agency or whoever it may be. It would be useful to have a rough figure of the number of cases that have been taken up out of the hundreds of thousands of cases that have been reported and notified to you, of which a minority you have then taken down. If you could provide the Committee with something along those lines, a bit of perspective, that would be helpful. My final question is on the proposals around the responsibility for regulating, and whether the cost of regulating and moderating should come from the social media companies themselves. I would say that should possibly include where you are failing to take down some things, including the things I have mentioned, and it then falls to the police or external bodies to do that. You should still bear the cost of that because you failed to do it. Do you think that is fair, Mr Slater? Derek Slater: I am sorry, I am not sure I understand the question. The cost of what we should be bearing? I am not sure I understand.

Tim Loughton: The Government is suggesting that the regulator should be funded by the industry it is responsible for regulating. Do you agree with that principle? Going further than that, if you fail to take down stuff you should be taking down, for which you may then be subject to fines, do you think you should have to pay the cost of somebody else doing your moderating work for you where it is decided subsequently that it has to be taken down? You should pay external costs. Derek Slater: Thank you for the clarification. On the structure of the regulator and how it is funded, yes, I think it is something we would want to work on to get the details right, but there is a feasible way forward there. It is important to get the penalty structure right as well to make sure it is proportionate to deal with where the harm is severe.

Our entire business is built around raising up and rewarding high-quality information and either diminishing, removing or reducing illegal, harmful content. If you try to size content moderation, in the last year we spent at least $1 billion on those efforts. We continue to invest, and we will grow that investment over time.

Q70 Tim Loughton: Is that $1 billion you spend on content moderation? Derek Slater: Yes. It is hard to give a precise figure, because our entire business is about access to information so what is in versus out of content moderation is difficult, but we invest there. As well, to quantify your other question—

Q71 Tim Loughton: To put it into context, what is the turnover of Google and your subsidiaries? Derek Slater: I do not have the precise figure in my head, although it is a public number.

Q72 Tim Loughton: It is trillions, isn’t it? You are spending $1 billion. It is quite a big figure, so you are spending only $1 billion on moderating out of a very, very substantially higher figure of your turnover, yes? I think we will take that as a yes. Derek Slater: Our entire business is about investing in addressing these sorts of challenges.

Q73 Stuart C McDonald: One other issue the Committee has repeatedly expressed concern about is how your algorithms promote certain content to users. Repeatedly in preparing for these sessions, Committee staff, depending on what they look at in their initial searches when they go on YouTube or Twitter, are very quickly having promoted to them some pretty awful content—“inappropriate” and “offensive” would be the weakest words we could use for them. That has happened again in preparation for this session. Going on YouTube or Twitter, there has been content promoted to them that could be described as anti-migrant, involving controversial race scientists, anti-Semitic or white supremacist. Mr Pickles and Mr Slater, why are your platforms still promoting that sort of content? Should a new regulator be able to see the algorithms you use and regulate them? Should we have an opt-in system so that content is not suggested to users unless they positively opt into it? Nick Pickles: You highlight the question of algorithm transparency, and I think it is a question of how we do that in an informed way. We have already taken a decision to give you the ability to turn off Twitter’s ranking algorithm on your home timeline. If you do not want to have an algorithmically ranked timeline and you want to have pure, no algorithm, reverse chronological Twitter, you can do that now. Regulators around the world are currently grappling with companies giving people more control over these algorithms. We wholeheartedly support that.

Q74 Stuart C McDonald: For example, if I go on to the Twitter page of a certain political movement, that is where the Committee staff were finding the controversial pages I have highlighted. What about the algorithms there? Nick Pickles: We have expanded the number of accounts we will not recommend. For example, I mentioned earlier state-affiliated media. This is something we are looking at more and being more aggressive in. You are absolutely right that giving people more control over whether they want these sorts of recommendations is exactly the kind of thing companies should be doing to rebuild trust.

Q75 Stuart C McDonald: What about transparency with the regulator? If the regulator were to say it wants to see these algorithms, would that cause you a problem? Nick Pickles: The question is what is the expected outcome? The challenge is that if you have the algorithm and you do not have the data, then you have code but you might not be able to reproduce what is happening on the platform. That question of just giving someone code might satisfy a notion of transparency but, if it does not inform, empower and enable something, providing code alone will not give you the public policy outcome you want. That is why we think control might be something the regulator would look at as an alternative to just looking at code.

Q76 Stuart C McDonald: What would the regulator need to look at to decide whether or not algorithms on your site are working appropriately? As well as the code, what else would it have to ask for to get a proper understanding of the issue? Nick Pickles: I think this is the question, the underlying data. We have spoken previously about the idea that a lot of the algorithms we use, including in content moderation, are proprietary and the training data, the code underneath them, is all proprietary. Expanding access to the machine learning model, the data that trained those models, is as important as looking at code. Looking at things like the moderation technology we use to detect speech might be where the training data is as important as the code for the regulator to look at.

Q77 Stuart C McDonald: To look back a little bit on the steps you have taken so far, the Committee staff looked at the thread for a group that you can fairly describe as a white nationalist group. But Twitter is then recommending very similar stuff, including another Twitter thread that promotes anti-Semitic conspiracy theories, politics and so on. Why does Twitter think that is acceptable just now?

Nick Pickles: First, we have continued to evolve and expand the rules in this area and we also have taken more aggressive action not to recommend accounts. This is clearly an issue of balance and, if we have not yet found the right balance of which accounts we should not recommend, we will keep looking at that. It is an area where we as a company think that, as well as investing in proactive technology, giving people control and choice in the long run is as important and means that people can make the decision of whether they want these recommendations or not.

Q78 Stuart C McDonald: Mr Slater, what are your thoughts on this? Why was it that the staff logged into YouTube and had content recommended that could be described as anti-migrant and so-called race scientists? Why is that happening now, and what are the answers the regulator should be looking at? Derek Slater: I echo a number of points Mr Pickles made. We use a number of different levers to deal with information, raising up quality sources of information, authoritative sources of information, and not only removing violative content but reducing or not recommending content that brushes up against those lines. We have made dozens of changes on how our recommendation systems work on YouTube over the last few years that resulted in a significant 70% reduction in views of those not recommended videos.

We will continue to look very carefully at that, and to work with you and other stakeholders to figure out ways to give users both control and choice. We continue to improve those. You can indicate, “Do not recommend me this channel again.” You can delete your watch history and so on, and we are providing transparency to do that in a way that is, as Mr Pickles said, informative, to have a constructive conversation and, importantly, does not enable bad actors to game the system and get around the measures we have taken.

Q79 Stuart C McDonald: My other question, briefly, is another AI challenge illustrated by the horrible attack in Christchurch. The challenge is in trying to stop that violent content being uploaded again and again. I think only a couple of hundred people viewed the live feed, but millions ultimately saw it. People were making slight changes to how they were doing it, and that meant it was not picked up and so on. Is it just beyond your organisations to be able to implement algorithms or schemes that will stop that happening, or is progress being made? How do you go about making sure banned users do not set up new accounts and simply reappear? Derek Slater: I apologise as your audio cut out a little bit but, if I heard it correctly, it was about how we are dealing with Christchurch and how we deal with re-uploads of content and new accounts being started. First, we continue to improve on our automated detection systems over time. If you look at YouTube’s community guidelines enforcement report, over 90% of the things we removed were first detected by machines. The vast majority of those, 80%, were removed before 10 views.

On our additional limitations and qualifications that you need to meet to live stream on the platform, certainly the Christchurch event was horrific and led us to take a number of different steps as a company in restricting mobile uploads and how we tuned our machine systems, but also as an industry. We signed on to the Christchurch Call to Action along with other Governments and stakeholders. We have worked through the Global Internet Forum to Counter Terrorism to improve how we deal with those sorts of perpetrator-filmed violent events through our content incident protocol, which we have since used in two different situations and we will continue to improve over time. I am happy to provide more detail if you are interested.

Q80 Stuart C McDonald: What about banned users setting up new accounts? How do you stop that happening? Derek Slater: We also use automated detection systems to see if they are trying to game the system. We might look at IP addresses or other sorts of metadata in that regard. I would be happy to go into further detail, mindful of not wanting to give the bad actors a road map to what we look at.

Q81 Stuart C McDonald: Mr Pickles, is this a challenge that is going to be incredibly hard to overcome? How far away are we from being able to prevent something like Christchurch, god forbid, being repeated? Nick Pickles: We have been grateful for the leadership of the New Zealand Government and the Christchurch Call to Action in helping to address some of these problems. Christchurch had two unprecedented challenges. First, the number of people who were maliciously, deliberately editing the video to try to evade our detection, so we were seeing new versions of the video every 60 seconds. I think the final tally of the number of unique copies runs into the tens of thousands, which we have not seen in another area.

Secondly, people uploaded the content to condemn it and call it out, people were challenging the information around this event. On Twitter we did some analysis: about 70% of the people who saw the Christchurch video on Twitter saw it because it had been posted by a verified account, and that included very high-profile news organisations and some celebrities who were posting this as part of their editorial coverage or to condemn it.

We are taking those accounts down, but people adding split-screen where they were speaking over it, and praying in some cases, was something we had not seen before, and it does make the detection challenge much harder. That is why we now share information faster between the industry to catch this.

On the banned accounts question, it is very similar. We use the same sorts of signals as we collect when people create accounts. We can compare them with previous accounts. There are two types of people: there are people who get caught by that, and there are very bad actors who deliberately try to evade and mask their identity. In those cases, it is harder and we rely on a mix of technology and user reports to identify those people, but to make that system fool-proof the amount of information you would have to collect is not feasible. I do not think it would be appropriate under GDPR, so it is a big challenge technologically, but we continue to look at it and collaborate as an industry.
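As an illustration of the signal comparison Mr Pickles describes for catching banned users who re-register, here is a minimal sketch assuming a handful of hypothetical signup signals and an arbitrary match threshold; it is not any platform’s actual detection logic.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SignupSignals:
    # Hypothetical signals of the kind described in evidence; none of these
    # field names corresponds to any platform's real schema.
    ip_address: str
    device_id: str
    email_domain: str

def matches_banned(new: SignupSignals, banned: list[SignupSignals]) -> bool:
    """Flag a new signup if it shares enough signals with a previously
    banned account. The threshold of two shared signals is arbitrary."""
    for old in banned:
        shared = sum([
            new.ip_address == old.ip_address,
            new.device_id == old.device_id,
            new.email_domain == old.email_domain,
        ])
        if shared >= 2:
            return True
    return False
```

The trade-off Mr Pickles alludes to is visible even in this toy version: adding more signals makes evasion harder but means collecting more data about every new user.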

Q82 Dehenna Davison: Hopefully you can all hear me, and my apologies if I go pretty quickly because the Division Bell is going and I do not want to deafen you. Mr Bertram, I want to focus on you like a laser beam. I think I am one of the few MPs who has used TikTok as a platform in my work, so I am very keen to ask some questions of you. We know that many users on TikTok are under the age of 24, and a recent Ofcom report found that over half of all eight to 15-year-olds use your platform despite the minimum age to access it being 13. Given there are so many children under the age of 13, would you say your age verification procedures are up to scratch? Theo Bertram: Our age verification processes are in line with the industry standard in that the user has to put in their date of birth before they use it. Once they are on the app, anytime an account is flagged for any reason, the moderator examines that account to see if it comes from someone under 13. We want to remove those under-13s from the account, and we do that routinely.

Critical to tackling child safety is not just age verification, and the UK is driving ahead of others on age assurance, the age appropriate code. We have already put in place a number of steps to protect under-16 users, thinking about the different profile of risk that they face on our platforms.

Under-16s are already not able to have direct messaging, they are not able to create livestreams, you cannot download a video from an under-16-year-old. Last week we announced more measures globally, which include that all video creation by an under-16 is set to private. Only approved followers will be able to see that video, and there will be no comments from strangers—you would only have comments from those that you have approved. We would not allow duet or stitching. I am sure you will know what that means, but for the rest of the Committee it is one of the ways in which users on TikTok engage by creating videos on the back of others or splicing them.

Those measures are designed to reduce the risk of grooming and bullying. Baroness Kidron, with whom I am sure this Committee is familiar, praised that step last week and so did the NSPCC.

Q83 Dehenna Davison: I appreciate the steps that you have taken, particularly the new measures brought into place this month, and you are definitely leading the way in some regards. The main concern is that so many people under the age of 13 are able to get on to the platform in the first place, particularly in the context of child sexual exploitation, which obviously for all of us is a huge area of concern. With TikTok having such a young user base, as a social media platform it has a particular appeal for anyone with any sort of perverse sexual interest in children. I want to ask a little bit about the algorithms you use, because I believe you gave some evidence recently talking about how you believe that your algorithms are a little more sophisticated than perhaps those who were in the first wave of social media platforms. Could you touch on that a little bit? In your earlier evidence you were talking about carrots and meals and colours, and I was a bit confused. If you could give a bit of clarity, I would appreciate that. Theo Bertram: Yes, I am not going to say that we are better than the others, but I would say we are different in the way that the recommendation engine works. Part of that is driven by the fact that we have very short videos of 15, 30 seconds. If you compared the experience of watching YouTube for half an hour, someone would watch many more videos on TikTok than they would on YouTube. The way that our algorithm works is to diversify the range of interests that a user has as they are watching content.

We are not sending you the same content that we think you would like. We are constantly trying to explore the full range of content that could possibly interest you. We are always diversifying. That is one reason why, without being flippant, the trending video at the moment is sea shanties. I do not think anybody is super-interested in sea shanties, or thought they were interested in sea shanties, but it is the kind of thing that pops up and there is a serendipitous moment where you think, “Oh, that is quite interesting” and we take that as a signal that you want to see more of that. We are trying to diversify the indicators, the signals, for what you would be interested in, and that is how our algorithm works.
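A minimal sketch of the kind of interest-diversifying re-ranking Mr Bertram outlines follows; the scoring, topic labels and penalty value are assumptions for illustration, not TikTok’s actual recommendation engine.

```python
def diversified_ranking(candidates, recently_shown_topics, penalty=0.3):
    """Re-rank candidate videos so that topics the viewer has just seen are
    down-weighted, nudging the feed to explore new interests.

    candidates: list of (video_id, topic, relevance_score)
    recently_shown_topics: set of topics already served in this session
    """
    def adjusted(item):
        video_id, topic, score = item
        if topic in recently_shown_topics:
            score *= (1.0 - penalty)  # damp repeats of the same interest
        return score

    return sorted(candidates, key=adjusted, reverse=True)

# Example: a lower-scoring but fresh topic (sea shanties) can leapfrog a
# high-scoring repeat topic once the repeat is penalised.
feed = diversified_ranking(
    [("v1", "dance", 0.9), ("v2", "sea shanties", 0.7)],
    recently_shown_topics={"dance"},
)
```

The design point is that the penalty creates the “serendipitous moment” described above: the system trades a little short-term relevance for broader signals about what else the viewer might enjoy.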

Q84 Dehenna Davison: Isn’t there still a danger that the algorithms would still direct anyone using the platform, with interests in grooming children that are frankly illegal, to similar content and perhaps, if they were to attempt to groom one of the children on your network, it would open them up to a much greater range of victims? Theo Bertram: We are probably the most hostile platform for someone seeking to do child grooming. They cannot direct message. No under-16 can direct message. No one at all can send direct messages with images or documents, so you are not able to share that. There is no livestreaming for an under-16, and there are no comments allowed on an under-16’s video. You cannot download an under-16’s video. We think we are designing quite a hostile place for someone seeking to do that.

Q85 Dehenna Davison: Yet if there are still children under the age of 13 accessing your platform, I suppose there is still a very good chance that children who are between the ages of 13 and 16 could perhaps be a little untruthful about their age to access some of those features. Is that something you are concerned about or taking any future action on? Theo Bertram: I think it is a challenge for all platforms. As one of the other witnesses said earlier, I don’t think there is a solution that anyone has come up with yet for accurate age verification of a 13-year-old. Whenever we find someone on the platform that we believe to be under 13, we will remove them and we do that because we are not a platform for under-13s. I think that is right for their protection. We also think we have a duty to protect all of the children that we have on the platform. By that I mean those who are aged 13 to 16 in particular.

Q86 Dehenna Davison: One more question in this vein. Do you think it is appropriate that if one of your users is found guilty of sending messages to children that they should not be sending, which have particularly explicit content, that they face only a one-week ban? Theo Bertram: No, absolutely not. This is not an accurate report, and I know the article that you are referring to. It told some of the story, but by no means did it tell the whole story. When we believe an account is infringing, we suspend it at that point while we carry out the review. When an account has been reported for child abuse content—and on our app we make it very easy to report child abuse content so that, when you see the video, hold your thumb down, it will come up and say “Flag” and you hold your thumb down and you can click “Child abuse content”—it is queued for review and action will be taken against it. The initial step is a suspension while that review is taking place. Any child abuse content would not only result in a ban, but we would provide that information to law enforcement.

The report took the first part of that but did not follow up with the second, which was unfortunate.

Q87 Dehenna Davison: That is very reassuring to hear, and thank you for providing some clarity on that. To confirm, if you find a case of child sexual exploitation through your platforms, do 100% of those cases get reported to the law enforcement agencies? Theo Bertram: Where we believe that the report from a user is valid. As one of the other witnesses was saying, obviously you get false reports from users about child abuse content. Law enforcement will tell you—and we work with them very closely here in the UK—that they want to make sure that when we are flagging content to them or to NCMEC, the organisation in the US, that we are doing so with a high rate of accuracy. All we do is make sure that we think the report is reasonable, and then we will pass that along.

Q88 Dehenna Davison: Thank you. One final question in a slightly different vein. I know back in September there was an horrific incident with a video on your platform with a very graphic suicide displayed that was then shared and shared and shared. If reports are to be believed, it even made the “For You” page at one point, so it definitely slipped through the net there. I know action was taken very quickly. Following that, I understand that your acting head wrote to other social media companies to seek a memorandum of understanding. What came of that? Has there been any progress in that regard? Theo Bertram: Yes, there has been some progress and we are working with industry. We are not quite ready to give an update on that, but as soon as we are I would be happy to write to the Committee, as I have also promised the Chair of the DCMS Committee, and we will give you an update. We are working on this.

Q89 Chair: A few follow-up questions from me. If I could go back to Mr Slater, were you saying that you have made big changes to your algorithm, in response to Stuart McDonald’s questions? Derek Slater: I am sorry, going forward or in the past?

Chair: Either. Derek Slater: Certainly over the last few years we have made dozens of changes to YouTube’s recommendation algorithm, which have led to significant reductions in the viewership for that borderline content that is not removed from the platform but does brush up against the line that may include harmful misinformation or other subject matters.

Q90 Chair: Do you think that your arrangements are now satisfactory? Derek Slater: I think we always have to continue to improve. This is a space where the threats are always changing and evolving. Bad actors are changing their habits, so we have to be ever vigilant. We will not rest on our laurels at all here.

Q91 Chair: It would be very helpful if you could write and give us more description of the way in which you have changed your algorithms. As Stuart said, it is something that we have raised repeatedly. I would also like to raise the issue about co-operation with police forces. TikTok, is it right that you charge the police for help with their requests? Theo Bertram: Not as far as I am aware, no, not at all. I think we would be similar to other companies. The only way in which we are different is that our UK entity provides access to the law enforcement in the UK, so there is no need for the MLAT process, which I think you are familiar with. We have a good relationship with UK law enforcement; our global law enforcement is led from Dublin. UK law enforcement is led by a guy who is from the National Crime Agency and, before that, Durham Police. There is good engagement there. I will definitely write to confirm, but I am pretty sure we do not charge them.

Q92 Chair: There are no charges to the police either for material for investigations or for any material that might be needed for court?

Theo Bertram: Not as far as I am aware. I have not come across that at all but I will check with the team and write to you to follow that up. We have a very good working relationship with law enforcement in the UK.

Q93 Chair: Snap Inc., can I just ask you the same question, Mr Turnbull? Henry Turnbull: No, that is not the case. We have well-established processes for supporting law enforcement investigations in the UK and internationally. I have never heard of any instances—

Q94 Chair: There are no charges. Mr Pickles, does Twitter ever charge the police for any investigations that Twitter does? Nick Pickles: Not that I am aware of, and I echo the similar comments from others.

Q95 Chair: Mr Slater, any charges from YouTube or Google to the police for any investigations? Derek Slater: Broadly similar, I am not familiar with any charges in this area, but I would be happy to follow up.

Q96 Chair: Thank you. Mr Turnbull, could I raise with you a case that was raised with me? It is a revenge porn case where the police approached Snapchat but, by the time Snapchat responded, the material had all been deleted. Even though it had sat on Snapchat for some time, wasn’t initially deleted immediately, had been seen by a range of people, there could be no prosecution and no action taken against the perpetrator in that case. How do you justify long delays in responding to the police if you delete your material so quickly? Henry Turnbull: I don’t think that Telegraph story was fully accurate. I should say up front that we fully realise how distressing that situation was for the victim and her desire for justice, as outlined in that case.

We are confident that we co-operated with law enforcement in this case in a way that is consistent with our policies. We are able to provide information in response to law enforcement requests. I think one of the lessons learned from this is the importance of law enforcement submitting a preservation request, which they are able to do when there is any indication of criminal content on Snapchat. That basically allows us to keep a snapshot in time of a user’s data, including basic subscriber information, for 90 days and allows us plenty of time to process any requests. You can also extend it for an additional 90 days. The real takeaway from this for us is how critical it is for law enforcement to submit preservation requests given issues around the ephemerality of content on Snapchat. We will be working with the Home Office to ensure that is well understood by UK law enforcement agencies.

Q97 Chair: Can you respond to preservation requests immediately? Henry Turnbull: We can certainly respond very quickly. The standard turnaround time for that is seven days.

Q98 Chair: When material might be deleted within 30 days or less, seven days is quite a long time for those preservation requests. Henry Turnbull: As I say, you can extend that preservation request further for an additional 90 days. Once we have had the preservation request, we can ensure that snapshot in time is saved and there is also plenty of time for that investigation to conclude and to extend further in the very rare instances that it is necessary.

Q99 Chair: If you got a request today and the material would otherwise disappear on Friday, what happens if you take seven days to respond to that request? Isn’t there a risk that the material is gone? Henry Turnbull: First, in most of these cases we are not providing content, we are providing account data related to the account that law enforcement is looking into. We have different response times for different types of requests. If there is an imminent threat-to-life situation, we can respond within an hour. Requests regarding serious crimes we can respond to within 24 hours, and then all other preservation requests we aim to respond to within seven days. We can act on those quite quickly, but we are not providing content in most of those cases. We have well-established processes to seek data relating to accounts that law enforcement are investigating.
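To illustrate the preservation-request lifecycle described in this exchange, here is a minimal sketch assuming a 90-day hold that can be extended by a further 90 days on request; the class, its field names and the single-extension limit are assumptions rather than Snap’s actual tooling.

```python
from datetime import date, timedelta

class PreservationRequest:
    """Holds a snapshot of an account's data for 90 days, extendable once by
    a further 90 days, mirroring the process described in evidence."""

    def __init__(self, account_id: str, received: date):
        self.account_id = account_id
        self.expires = received + timedelta(days=90)
        self.extended = False

    def extend(self) -> None:
        # Assumed limit of one extension; the evidence only states that an
        # additional 90 days is available.
        if not self.extended:
            self.expires += timedelta(days=90)
            self.extended = True

    def is_active(self, today: date) -> bool:
        return today <= self.expires
```

The practical point is the one made in the answer above: the snapshot is taken when the preservation request arrives, so the ephemerality of the underlying content matters less than how quickly law enforcement submits the request.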

Q100 Chair: If there was a request around a revenge porn case, would you not expect to provide any content? Henry Turnbull: We do not typically provide content.

Q101 Chair: The question here is whether a crime has been committed or not, and what the evidence is of a crime having been committed. If someone’s experience is that there have been explicit photographs or videos of them that have been shared widely, even if they are then deleted, a lot of people may have seen them, which can be deeply distressing for someone. For the police to be able to put a charge through court, they need some evidence that a crime has taken place. Are you saying that, even in those circumstances, you would not expect to provide any content to the police? Henry Turnbull: Currently there isn’t a legal framework that enables us to share content with UK law enforcement. That is going to change with the implementation of the UK-US bilateral agreement, but we do not tend to store content further than 24 hours, as you say.

We have processes to supply law enforcement with the information they need to support investigations. This works across a wide range of areas, including revenge porn, and getting access to that account information to support their wider investigations enables the police to take action.

Chair: Thank you very much. We need to move on to our second panel now. Thank you to everyone for your time. There are two follow-up issues that we would like to deal with, with all of you. The first is a question I have not had time to ask, about the number of people that you currently have involved in content moderation, and also the number of people you have directly working for your organisations that are involved in content moderation and the number that you have contracted out to other agencies and organisations. That would be really helpful. The second issue is that we have some individual cases of which we have not wanted to raise the full details in an open session. They are individual cases around child sexual exploitation and white supremacism that we would be grateful for your responses on as well. Thank you very much for your time and for your patience in what has been a long session. We are very grateful for your time.

Examination of Witnesses

Witnesses: Monika Bickert and Niamh Sweeney.

Q102 Chair: We are now turning to the Facebook Group to cover issues around Facebook, Instagram and WhatsApp. Joining us we have Monika Bickert from Facebook and Niamh Sweeney from WhatsApp. Thank you very much for your time, we are very grateful, and for your patience in starting this evidence session, as I recognise the first panel has gone on longer. I will begin with the question that I asked the other panel. On a day when there has been such concern about whether the inauguration might be disrupted, where there have been 25,000 national guard troops involved in defending the Capitol after we saw such awful events and the assault on American democracy, how far do you feel that Facebook shares some responsibility in providing platforms that have enabled the escalation of hatred and incitement to violence? Monika Bickert: Of course we know that any technology platform can be abused and, in fact, we see people trying to share hate speech or trying to incite violence on our platform. We are very aggressive in trying to identify and remove that, and we are transparent about how we do it. We publish these numbers every quarter showing exactly what we see.

In the run-up to the US presidential election and carrying through to today’s inauguration and transfer of power, we have taken some unprecedented steps. I will point out a few of those, if I may.

One is that we have worked around the clock, we have a 24-hour operation centre, where we are looking for content that could incite violence or content from groups that we call militarised social movements. Basically these are groups of citizens who might use malicious language or encourage people to bring weapons to events. We are looking for that content and we have, since we put that policy in place this summer, removed 890 such movements, which has led to a corresponding removal of 30,000 pages, groups and events.

We have taken a number of actions in this space, and we will get into that later, but just to be mindful of time I point out that in the run-up to the inauguration today we removed any events trying to co-ordinate people getting together in DC or in other state capitals. That is quite sweeping, if you think about it. A number of those events might be people who are organising for completely peaceful reasons but, out of an abundance of caution, we have taken this unprecedented step to ensure that our services are not abused or attempted to be used for bad purposes.

Niamh Sweeney: Co-ordinating harm of any kind is prohibited on WhatsApp. We haven’t seen any evidence of that at this point. We receive tips from Facebook and, because trying to co-ordinate harm is public-facing by nature and requires rallying people to attempt to do so, it is easier and more in Facebook’s remit to look for that content. If they find it or see any evidence of it moving to WhatsApp, they will give us information and we investigate. As I said, we have not seen any evidence of an attempt to co-ordinate harm on this occasion, but we continue to look at the information that is provided.

Q103 Chair: Should you have been doing some of the work that Facebook has been doing in the last few weeks much earlier? Do you take some responsibility for the fact that some of these organisations have been able to proliferate on Facebook for a very, very long time? Monika Bickert: Of course we are always evolving in understanding which groups are abusive. I used to be in Government for a number of years, and I am familiar with how that process works in designating prohibited organisations, at least in the United States.

On the technology company side, there are some challenges and some benefits. One is that we can move faster sometimes than a Government process might. For instance, we are able to designate some of these militia groups and other dangerous organisations quicker than a Government could. We also have some limitations. We can only see what the group is showing us about its speech and its aims. We have been very vigilant over the past year, and before that too but especially over the past year, in trying to get ahead of these groups and stop them early. That is why we put the policy in place on the militarised social movements that I mentioned. Before that we had a longstanding policy against white nationalist or white supremacist groups, and we have designated over 250 such groups and aggressively removed their content.

Finally, we have worked with academics and safety experts to understand trends like QAnon and conspiracy networks that might try to incite violence. We put in place, over the summer, very aggressive policies against groups like that. We have now removed over 40,000 groups, pages, events and Instagram accounts for violating our QAnon policies. This is something we take seriously.

I also want to point out that in the weeks running up to the 6 January violence at the Capitol we had teams that were focused on understanding what was being planned and if it could be something that would turn into violence. We were in touch with law enforcement, we were responding to their requests beforehand and we continued to do so afterward. We have provided information to law enforcement, both in response to their requests and proactively when we have come upon it in our own investigations.

Q104 Chair: Why did the Tech Transparency Project say in May of last year that white supremacist groups are thriving on Facebook, that half of designated US white supremacist organisations had a presence on Facebook and that the Facebook system of related pages was often directing users to visit other extremist and far-right content? Were they just wrong? Monika Bickert: I think we are very aggressive at identifying and removing white supremacist and white nationalist organisations. As I said, there are literally hundreds we designated. We don’t just do this ourselves. The people who run that team—and this is a team that sits under me, our dangerous organisations team—have literally spent their careers in understanding radicalisation and terrorism, and they maintain a network of experts outside of the company that they also work with.

Q105 Chair: Is the Tech Transparency Project wrong? Monika Bickert: I can’t speak to why they have the opinions they have. I can just tell you what we do.

Q106 Chair: What they said was that half of designated US white supremacist organisations have a presence on Facebook. Is that correct? Monika Bickert: We do not allow any white supremacist or white nationalist organisations on Facebook. We don’t allow any.

Q107 Chair: So they are wrong? Monika Bickert: If they are saying that we allow those groups, we do not allow those groups. I would also suggest that there are academics and experts that we work with regularly who know our approach and help us identify these sorts of groups. I am happy to follow up with that sort of information.

Q108 Ms Diane Abbott: In response to a question from the Chair, when told that a reputable organisation says that half of white nationalist groups have a presence on Facebook, the representative from Facebook said, “That is not true.” How do you know it is not true? Monika Bickert: What I said is that we do not allow any such organisations. Any time that we become aware of a white supremacist or white nationalist group, we remove them and we go further than that. We do not just remove them from our platform, we do not allow anybody to praise them or support them. If you were to name a white separatist, nationalist or supremacist group and say, “This group is wonderful,” we would remove that even if the person is not a member of the group.

Q109 Ms Diane Abbott: The Committee understands that. The Committee understands that you would not knowingly allow a white nationalist group on Facebook. The fact that so many of them flourish on Facebook must surely mean that you are not proactive enough in taking down that sort of presence. It is one thing to say that you would not knowingly have them on Facebook, but if half of white nationalist groups have a presence on Facebook—and we have seen the horrific scenes in the Capitol because of white nationalist groups—it seems to me that you are extraordinarily passive. Monika Bickert: We are not passive. We are very aggressive in proactively searching for this kind of content. We put out a report every quarter that details exactly how proactive we are. In fact, keeping in mind that we remove any such content that is reported to us, I think it says something that, if you look at our published numbers, around 99% of the content that we remove for violating our terrorism or dangerous organisation policies is content that we find ourselves before anybody has reported it to us. That speaks to our proactive measures.

Q110 Ms Diane Abbott: As a Committee we have to look at outturns, and the outturn in relation to Facebook and white nationalists is a very poor outturn and tends to suggest that, however proactive you think you are, you are not proactive enough. White nationalists are a threat to society, a threat to public order and, of course, a threat to Jewish people, people of colour and other minorities. Let’s move on now to President Trump’s accounts. Who took the decision to impose a block on President Trump’s Instagram and Facebook accounts? Monika Bickert: That was a company decision. I was involved because I am responsible for our content policies, but our senior leadership was also involved.

Q111 Ms Diane Abbott: I understand it was a company decision. We read that in the papers, but who would have been the individual that finally had sign off on the decision? Monika Bickert: Again, this is a decision that was made at the leadership level. I am involved because I am responsible for our content policies, but when we are making very significant content policy decisions it is not just me making them, it is not one person. We have senior leadership involved.

Q112 Ms Diane Abbott: There is no individual in Facebook who ultimately takes responsibility for these decisions? No individual at all, it is just a generic company decision? Monika Bickert: Of course Mark Zuckerberg is our CEO and, as with every company decision, he is ultimately responsible for the company. When we make content policy decisions, the vast majority of those are made by our content moderators. Some of those come up to members of my direct staff, a very small number of those come up to me and then

the most significant of those are made with input from and collaboration with senior leadership as well.

Q113 Ms Diane Abbott: Did the decision on ex-President Trump’s accounts come up to you? Monika Bickert: Yes. This was a decision made with the involvement of the senior leadership of the company.

Q114 Ms Diane Abbott: It came up to you, and then who exactly did you pass it on to? Monika Bickert: What I can tell you is that this is a decision that was made in accordance with our policies. I was involved, I was not the only decision-maker. This was a decision that involved our senior leadership as well.

Q115 Ms Diane Abbott: Thank you for that answer, but you do leave the Committee thinking that your unwillingness to name an individual, or two or three individuals who take responsibility for the decision, means you are trying to hide from accountability. Most big companies, when it is something so serious, could in the end point you to the person who took responsibility. It seems to me that Facebook, at the top of the company, is trying to evade responsibility in these very dark and difficult times. We have just seen a highly militarised inauguration of the President of the United States precisely because of the kind of horrors that emerge from these white nationalist groups on Facebook. There is a difference between politicians expressing their views, which people may or may not like, and incitement to violence. Are you able to tell me how you draw that line, or even who draws that line? Monika Bickert: Yes, we have published our standards on what is acceptable behaviour on our platform. When it comes to what I would call organic content—this is what a user might post on his or her profile or page—our rules are called our community standards. That is public, as are the details of those standards, and one of the things that you will see is that we do not allow celebration of acts of violence. If there is some sort of violent attack and somebody says, “I am glad that happened” or “that is wonderful,” that violates our policies. We removed President Trump’s content for violating that policy. That was both a video and a text post. That is just a consequence of our community standards, which are public and apply to everybody.

Ms Diane Abbott: I think people listening to you will think that the fact Facebook cannot say who ultimately took responsibility for the decision in relation to ex-President Trump’s activities on Facebook, the fact that you cannot give me a person that takes responsibility, is quite telling. Thank you very much.

Q116 Simon Fell: I have a couple of brief questions. Following on from Diane Abbott, do the recent decisions to take action against the former President impact on the assertion of companies like yours that you are platforms not publishers? Monika Bickert: No, they don’t. The removals here were us enforcing our publicly stated policies. We have this set of rules or community standards that we have had since I joined the company, which was probably nine years ago. They make it clear that if you come to our services this is what you can expect to see and this is what we will not tolerate in behaviour and content. Nothing changes from us enforcing those policies; we enforce these rules against everybody.

Q117 Simon Fell: The reason I ask is that I am looking at the quote that your chief executive, Mark Zuckerberg, gave on this where he specifically cites the context being fundamentally different between what was shared by the former President. In my view, that is editorialising surely. That must push you into the publishing bracket. I assume you do not agree with that assertion. Monika Bickert: I want to make sure I understand the question. You mean do Mark’s public comments push us into that realm?

Simon Fell: Yes. Monika Bickert: We care a lot about people understanding why we make the decisions that we make, and we have tried over the years—in part in response to Government and public feedback—to give more insight into how we make those decisions. No, I don’t think that makes us an editor. We are just trying to give people transparency into how our rules work on our site.

Q118 Simon Fell: I am interested because, when we had some of the other social media companies in the previous session, they were certainly open to the idea that the relationship is changing as time goes on, as some of these issues are aired on your platforms and as decisions are made as to what is allowed to be shared and what is not. I will follow up on this line of thinking, and then I will pass on to another Committee member. I am interested in how you manage the reporting conditions and the discussions that go on in your forums around legal proceedings. I know of two or three forums where there are active discussions going on about cases that are before the court. This obviously puts those trials in a very difficult light where you risk the viability of prosecutions, the anonymity of certain people who are involved and, indeed, to an extent sometimes the ability to conduct a fair trial. I am interested in how you look at issues like that and decide where the bounds are being pushed too far and you try to rein them in, or whether you think this is a fair discussion. Monika Bickert: We take significant direction from the court orders themselves. This is a very timely topic. We published in November new language in our community standards that makes public some longstanding policies that basically require us to have additional context before we can enforce them. The rules that we have long had on the site are the ones that we enforce at scale. They are the ones where we can look at the content and just enforce them.

What we published in November are the rules that apply when we have additional information, say from a court, telling us that this is an informant’s name or this is a victim’s name or this is a party whose name is not public at this point, or this is something that is sealed information at that point. If we have that context we take steps, including sometimes proactive steps, to make sure that people are not sharing that content on the service. As I am sure you can appreciate, that can be a challenge for us. There are so many different legal proceedings going on, but we do have those communications through our legal team.

Q119 Ruth Edwards: I want to ask some questions around people smuggling, both into Europe and from continental Europe into the UK. The National Crime Agency told us that it found organised crime groups had been using social media platforms—they specifically mentioned Facebook, WhatsApp and Instagram—to organise the people smuggling and to advertise it. What evidence have you seen of this on your platforms, and what policies and procedures do you have in place to prevent it? Monika Bickert: This is something that we have seen people try to do on Facebook and Instagram, and we have longstanding policies against it. I will speak a little bit about the policy and some of the ways we try to enforce it. I was a criminal prosecutor for more than a decade, and I worked on a lot of child safety cases but also human trafficking and human smuggling cases. One of the things I know from my own background is that, although the two crimes are very different, when you look at how they manifest themselves as a bad actor trying to recruit somebody or facilitate the crime, they can look the same. The would-be trafficker will often try to masquerade as providing some sort of smuggling services.

When we craft our policy lines we don’t allow any of these: smuggling, facilitation, recruitment, planning, evidence of harbouring, anything pertaining to the organising or facilitation of smuggling, nor do we allow the same with human trafficking. Our definitions there come in large part from international instruments like the Palermo protocol. We also look at relevant laws and we consult with anti-human trafficking, migration and safety organisations to try to understand not only the nuances of where we should draw that line but also what the trends are in specific regions. As you know, these crimes are often the result of situations on the ground that could be instability or lack of economic opportunity but groups like, say, IOM or Human Rights Watch often have an early understanding of where these issues are likely to manifest. We have regular dialogue with them, and that helps us make sure that we are tracking, identifying and removing content that violates our policies.

I want to be clear, our enforcement is not perfect. This is an adversarial space. These actors use coded language. They will do things to try to evade us, but largely this is a business that depends on getting the word out there that you are trying to recruit people, trying to attract people, and that is the sort of thing we can go after. Of course, if we remove it and see an imminent threat to life, we would proactively refer that to law enforcement.

Q120 Ruth Edwards: One of the things that worries me is that the National Crime Agency told us last September that, of the roughly just over 1,200 pages they had reported to social media companies, only about 40% of these had been removed and the rest remained online and people could access them. The reason given for them not being removed was that they did not violate the policy of the different platforms. One of them was Facebook. I find that very puzzling. What explanation can you give for them not violating Facebook’s user terms? Monika Bickert: First, I have not seen that specific content. I am happy to follow up on that and we can walk through it. We have a very productive relationship with the NCA in this area. Not only have they participated in our training that we offer our content moderators and policy team to understand the issues and trends—I think they have done that a couple of times over the years—but they also have provided information to us that has allowed us to remove human smuggling content, and we have provided information to them. Last December I know that we referred a sex trafficking organisation that we had discovered and sent that over to them as well.

That is a productive relationship. Again, I have not seen that content, but I will say that, just as I have had conversations with law enforcement over the years, one area of our policy where I have received questions before is what we do with content where a person says they are seeking to be smuggled, or somebody posts information about how to leave a country. There is no sign of smuggling, there is no, “I can get you out of the country” or offer for services, but if somebody says, “Here is how to get out of the country” we allow that. To be clear, we allow somebody to say they want to get out of the country and we allow information that is posted saying, “Here is how to get out of the country,” so long as there is no evidence that that is part of a smuggling offer or somebody trying to facilitate smuggling.

Let me be clear about why we allow that. When we talk to safety groups and those who are focused on safe migration specifically, they have told us that it is important to allow that content because it could help save a life. This is consultation that we continue to do. Since 2018 we have talked to more than 40 organisations around the world, and certainly including in the UK, to make sure that we understand if our line is in the right place. The input we have is that this line is in the right place. We will still continue to remove any type of facilitation or organisation or attempts to offer smuggling services, but we will still allow people to say that they want to get out of the country.

Q121 Ruth Edwards: Does it depend at all on which country the person posting is based in? I could understand that argument for a number of countries around the world, for example somebody posting to get out of any country where there is an authoritarian regime. If we look at the issue of Channel crossings, I can’t see any sensible reason for allowing posts about tips to help people smuggle themselves out of France and to put themselves in real danger by getting in a small dinghy and going across the Channel. We have seen a number of people tragically lose their lives in the Channel. Surely that kind of thing would violate your rules around preventing potential offline harm that may be related to content on Facebook. Does the geography make any difference? Monika Bickert: It makes a difference to our understanding of how to identify and recognise the smuggling or trafficking activity, but we still would allow somebody to ask how they can leave a country, regardless of which country it is. This is based on our consultation with safety groups and from a company standpoint it is hard for us to know what the circumstances are that are causing somebody to feel desperate to leave. It could be that they are in a domestic servitude situation or an abusive situation where they feel like the safest thing for them, even accepting it has serious risks, is to try to flee a country.

Q122 Ruth Edwards: I see what you mean. It is interesting though, because when we spoke to the National Crime Agency they were very frustrated that 40% of the pages they had asked to be taken down have not been taken down. It seems odd, if that was the reason why they were not taken down, they would be frustrated about that. Could you look into requests from the National Crime Agency that have not resulted in a takedown from Facebook and write to the Committee to let us know the reasons behind that? It would be very useful for us to see that. Monika Bickert: Yes, I am happy to do that.

Q123 Chair: I will follow up some of those questions with Niamh Sweeney. We heard from the NCA and others about WhatsApp often being used as the referral point. For adverts that might start on Facebook, the referral point is to contact someone on WhatsApp to organise or to connect with the gangs involved. What are you doing to prevent that? Niamh Sweeney: Both smuggling and trafficking are prohibited under our terms and, where we see evidence of it, we will move to remove the accounts that are facilitating or organising the crime. Where there is an imminent threat of harm, we also make a referral to law enforcement.

Much more common, however, is where law enforcement will come to us and ask for our assistance in investigating instances of either smuggling or trafficking. That could be to identify the location of a facilitator or a victim. That has happened with UK law enforcement in recent months. We had cases of that in 2020.

The business model behind these types of crime requires a certain level of visibility, and that is where one of the key differences between the public-facing platforms like Facebook, Instagram, Twitter, Snap and TikTok and a private messaging service like WhatsApp comes into play. To state the obvious, WhatsApp, as a private messaging service, does not have the same functionality. There is no searchability, there is no discoverability, so unless you have somebody’s number saved in your phone, you cannot reach out to them on WhatsApp. You cannot search for certain types of content and we are not offering you certain types of content—issues that were highlighted by some of the members earlier where algorithms are in play.

Q124 Chair: They are using you for a different bit of the criminal process. Niamh Sweeney: Yes, what I am trying to get at there is that—

Chair: It is one bit, but you are still being used for crime. What are you doing to prevent that? Niamh Sweeney: I understand. What I was getting at with the visibility issue is that the much more effective way to get at this problem is to cut them off upstream. We take information from Facebook and other third parties and look to see if there is evidence of the crime happening on our platform.

Q125 Chair: How would you tell whether there is any evidence of it happening if you have end-to-end encryption? Niamh Sweeney: We do not have access to the messages, but we do have access to user reports, so we can see evidence that way. But again, much more common is where law enforcement is conducting an investigation and comes to us and asks for our assistance, and we believe in supporting the work of law enforcement in tackling these crimes.

Q126 Chair: But you cannot tell law enforcement whether a crime has taken place. What if it is child abuse images, for example? Are you able to identify child abuse images, perhaps the Internet Watch Foundation has identified images, the hash images and so on? Are you able to identify those on WhatsApp? Niamh Sweeney: We ban about 250,000 accounts every month for participating in groups that are sharing child exploitation imagery. You are right in saying that we are fully end-to-end encrypted, as is the industry standard now in private messaging, but we have what we refer to internally within the company as unencrypted surfaces within the app. They will include things like your profile photo, a group photo, group name, group description and some other elements. We use all of the available unencrypted information specifically to go after child exploitation imagery and terrorism.

In the child exploitation imagery example, we use photo and video matching technology against all of the unencrypted surfaces, so profile and group photos. We use a keyword-based classifier on all group names and group descriptions. Through that we surface many groups participating in this kind of behaviour. We will ban all the members of those groups, but we also make many hundreds of thousands of referrals to NCMEC every year where we send violative content that has been identified either through—sorry, I should mention we also receive many user reports and that often surfaces this content, too.
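A minimal sketch of the kind of unencrypted-surface check described here, assuming a hypothetical hash list and keyword list; WhatsApp’s actual matching technology, thresholds and data sources are not public, so every name and value below is illustrative only (real systems also use perceptual rather than exact hashing).

```python
import hashlib

# Hypothetical stand-ins: placeholder hash list and keyword list for illustration.
KNOWN_BAD_IMAGE_HASHES = {"placeholder-hash-from-a-known-image-list"}
BANNED_KEYWORDS = {"example-banned-term"}

def image_hash(image_bytes: bytes) -> str:
    """Exact SHA-256 digest of an unencrypted surface (profile or group photo)."""
    return hashlib.sha256(image_bytes).hexdigest()

def surface_flags(profile_photo: bytes, group_name: str, group_description: str) -> list[str]:
    """Reasons to queue this account or group for enforcement review."""
    reasons = []
    if image_hash(profile_photo) in KNOWN_BAD_IMAGE_HASHES:
        reasons.append("photo matches a known hash")
    text = f"{group_name} {group_description}".lower()
    if any(term in text for term in BANNED_KEYWORDS):
        reasons.append("group name or description matches a banned keyword")
    return reasons

# Only unencrypted surfaces are inspected; message bodies, being
# end-to-end encrypted, never reach this check.
print(surface_flags(b"\x89PNG...", "holiday photos", "family group"))
```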

Q127 Chair: Can you just confirm that figure again, the number you are currently banning a month? Niamh Sweeney: Over 250,000. That does not translate into 250,000 referrals to NCMEC because there would not be evidence against each of those accounts. However, because we are pretty certain that they are participating in groups where this is happening, their accounts are banned. But for anyone where we have evidence that they shared the content, we make the onward referral to NCMEC.

Q128 Chair: There are 250,000 cases a month where the evidence is based on somebody posting an image in their profile, which somehow indicates that child sexual exploitation is taking place, or in the title of the group that indicates it is taking place, or an individual member reports it. However, if that, just on the name, the profile picture and the reports, identifies 250,000 a month, how many examples of child sexual exploitation must be happening on WhatsApp where the content is in the WhatsApp stream itself and is not identified in the profile? Niamh Sweeney: People need to identify each other to exchange material. That is why there will usually be some sort of evidence available on the unencrypted surface that will help us to identify them. But encryption is the industry standard now in messaging. The key thing that sets us apart is our commitment to working with law enforcement on these issues.

Q129 Chair: What is your estimate then? If you have an estimate of 250,000 cases where somebody is obvious enough to put an image in their profile picture, or in something explicit they say to somebody through another channel, how many covert cases do you think are going on, on WhatsApp, that you do not know about? Niamh Sweeney: I could not speak to that. We have over 2 billion people using WhatsApp every day around the world, so obviously there are a lot of ordinary people using it to go about their business. Again, because there are so many encrypted messaging services available, you have to look at the difference between those who work with law enforcement and want to support their efforts and provide the available information and those that do not. There are many who make a virtue of the fact that they do not co-operate with law enforcement and do not provide that information. If I was seeking to share this content, I probably would not choose WhatsApp to do it on because we make these proactive efforts and work so closely with law enforcement.

Q130 Chair: But if somebody shares an awful piece of child abuse with someone else on WhatsApp, and they do not put it in their profile, they do not put it in their name and they do not put it in the title of the group—they just share awful images with a large group of people on WhatsApp—how do you know it is happening? Niamh Sweeney: Messages are fully end-to-end encrypted, so we would not be able to identify it through one of those deliveries.

Q131 Chair: So you do not know that it is happening. Given the number of cases that you can just identify through their profile pictures, a shocking number of cases must be happening that are end-to-end encrypted and that you do not know anything about. Niamh Sweeney: I can only speak to the ones that we do know about, and we make hundreds of thousands of referrals to NCMEC every year.

Q132 Chair: Let us move back to Ms Bickert. You want to extend this end-to-end encryption to all of your Facebook messaging, your other services and so on. How are you going to be able to identify the Internet Watch Foundation’s images, the ones that everybody has agreed will be taken down, if you are introducing end-to-end encryption? Monika Bickert: As we are looking to move towards end-to-end encryption for our messaging services, which includes Facebook and Instagram direct messaging services, one of the things that we are trying to do is leverage the available unencrypted content that we have from Facebook and Instagram to try to understand who is behind these accounts. That can be a very useful signal for us in stopping this abuse at the outset.

Another thing that we are doing is leveraging metadata and traffic patterns to try to understand—I believe WhatsApp does this as well, and this is something we are learning from them, as well as learning on our own—the patterns that we see to identify when behaviour might be spam behaviour or attempted grooming of a minor to engage in sexual activity. That sort of conduct has a pattern associated with it that we can identify and take action on.

Beyond that, one important component is user education and giving people the tools that they need either to use our reporting functions, so they will be able to report content to us and we can send that on to law enforcement, or to avoid a bad situation in the first place.
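A toy sketch of the metadata-and-traffic-pattern approach described above. The signals, weights and thresholds are invented for illustration; a production system would use learned models over far richer metadata, and nothing here reflects Facebook’s or WhatsApp’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class AccountActivity:
    """Metadata-only view of an account; no message content is read."""
    account_age_days: int
    first_contact_requests_24h: int   # new conversations initiated with strangers
    recipients_reported_minor: int    # recipients whose accounts indicate they are minors
    blocked_by_recipients_24h: int

def risk_score(a: AccountActivity) -> float:
    """Toy weighted score over traffic patterns (illustrative weights only)."""
    score = 0.0
    if a.account_age_days < 7:
        score += 1.0                  # very new accounts are treated as higher risk
    score += 0.1 * a.first_contact_requests_24h
    score += 0.5 * a.recipients_reported_minor
    score += 0.3 * a.blocked_by_recipients_24h
    return score

def action_for(a: AccountActivity) -> str:
    s = risk_score(a)
    if s >= 5.0:
        return "disable pending review"
    if s >= 2.0:
        return "rate-limit and show safety notice to recipients"
    return "no action"

print(action_for(AccountActivity(2, 40, 3, 6)))
```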

Q133 Chair: This just sounds hopelessly naive. When you have people who deliberately look for ways around systems, where we know that people go to great lengths to perpetrate vile crimes against children, the kinds of things that you are talking about sound like the sorts of things that are extremely easy for perpetrators to evade. You are making it even easier for them by introducing end-to-end encryption. Monika Bickert: This does not address all of it, I agree with you. The numbers show that the user messaging and education is useful. We have right now more than a million people a week—or a million instances a week, because it might be some of the same people—where we see somebody avoiding a potentially bad situation because of the messaging that we are giving them on how to avoid a connection that might be somebody they do not know, that could be somebody trying to spam them or somebody trying to engage in an inappropriate conversation with them.

Q134 Chair: NCA assesses that reporting from Facebook results in more than 2,500 arrests by UK law enforcement and almost 3,000 children safeguarded in the UK. What proportion of that relies on your being able to identify the images? What proportion of that will not take place if you go to end-to-end encryption? Monika Bickert: I cannot answer that. I can say that, in our responding to UK data requests, the vast majority of the requests that we get from UK law enforcement are of the sort that we would still be able to fulfil.

Q135 Chair: We are talking about the other way round. Of the cases that Facebook has reported to the National Centre for Missing and Exploited Children that then get passed on to the NCA, 3,000 children are safeguarded in the UK in a year. How many of those cases disappear if you introduce end-to-end encryption? Monika Bickert: I do not know the answer to that. I would expect the numbers would go down, but I do not think that is simply from the change that we will have in the available signal. As we get more aggressive, it is just prohibiting people from having access to the service in the first place. That should also drive those numbers down. I hear you on the point that the proactive—

Q136 Chair: You are not getting more aggressive in preventing them from accessing the services. This is just doublespeak, isn’t it? You cannot simultaneously say you are going to introduce end-to-end encryption, which means you will not be able to identify hashed images, and say you are being more aggressive at preventing them using your services. You are doing the opposite. Monika Bickert: What I mean by being more aggressive is being more aggressive at the front end. This is similar to what Niamh mentioned when she said that WhatsApp takes down the 250,000 accounts.

Q137 Chair: All the other people that you do not know who they are. The NCMEC estimates that 70% of Facebook’s reporting would be lost. If its estimate is 70%, what is your estimate? Monika Bickert: I do not have an estimate for you. Again I am pointing to the WhatsApp experience here, but Niamh has mentioned taking that aggressive approach where they are removing even somebody who is suspected. They may not be, but somebody who is suspected of engaging in this behaviour. That is the sort of measure we can put in place to hopefully drive down this activity. But we are also able to leverage what we see on Facebook and Instagram and what we understand about those accounts.

Q138 Chair: Explain it to me in more detail, because I simply do not understand. If you have two people who are sharing awful child abuse images with each other—images that the IWF has identified—on an open platform your systems would pick them up. In the new system, so on WhatsApp currently, if they have not put it in their profile, they have not done anything else, they are just sharing it between them, would you have any way of picking it up? On Facebook’s messaging systems in their new world—in the end-to-end encrypted world—will you have any way of picking that up? That is an image that the IWF has already identified and given you the hash to be able to find. Monika Bickert: You are right. If content is being shared and we do not have access to that content, if it is something we cannot see, it is something that we cannot report.
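The exchange above turns on a simple technical point: hash matching needs access to the image itself. A toy illustration follows, using an exact hash and a crude XOR step purely as a stand-in for encryption to show that ciphertext no longer matches a known hash; real services use proper end-to-end encryption protocols and perceptual rather than exact hashes.

```python
import hashlib
import os

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# A known image that has already been hashed for a block list (placeholder bytes).
known_image = b"...known abusive image bytes..."
hash_list = {sha256(known_image)}

# On an unencrypted surface the service sees the upload and can compare it.
upload = known_image
print(sha256(upload) in hash_list)        # True: detectable and reportable

# Under end-to-end encryption the server only ever sees ciphertext.
key = os.urandom(len(known_image))
ciphertext = bytes(b ^ k for b, k in zip(known_image, key))   # toy stand-in for E2EE
print(sha256(ciphertext) in hash_list)    # False: nothing to match against
```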

Q139 Chair: Do you accept then that your reporting of cases of serious abuse against children, the kinds of reporting that currently rescues children from very dangerous situations, will go down under end-to-end encryption? Monika Bickert: I do expect those numbers will go down. I do not think that is only because we cannot see content.

Q140 Chair: Why is Facebook trying to introduce something that will put more children at risk, that will make it harder for law enforcement to rescue vulnerable children? Why are you doing this? Monika Bickert: We are trying, first, to meet the industry standard, which is maximising—

Chair: You are the industry. You can decide what your standard is. Monika Bickert: Secondly, we want to make sure that we are providing an experience that keeps people safe, especially from the crimes that hit closest to home and are most serious to them.

Chair: In replacement of keeping those children safe. Monika Bickert: I know this is something that you are already aware of, but adults surveyed in the UK have said that the online crimes that are most concerning to them are data loss and hacking. It is that identity theft crime.

Q141 Chair: What are you most concerned about? Are you more concerned about that than about children? Monika Bickert: I spent my earlier career as a prosecutor working on cases like violent offences against children and human trafficking offences, so of course it is something that is quite serious to me. But I also want to be mindful of all the different types of abuse that we see online. This is a complicated area. I do not think there is a very clear answer on how to keep people the most safe most of the time. This is also something that Governments have struggled with for as long as I have studied it or been aware. Governments have long had to tackle this issue of how you keep people safe and secure and also ensure privacy, and this is something that we are also struggling with. How do we make sure people are safe from hacking, data theft and identity theft, but also make sure that children are safe online and also make sure that we are living up to privacy obligations and expectations?

Q142 Chair: You have recognised that your system will make children less safe online. The UK is about to introduce online harms legislation. That prioritises the safety of children and the duty of care around children, and is clear that encryption services are not exempt from the approach to protecting children. How do you expect that you are going to be able to fulfil the obligations and the duty of care, particularly around preventing child sexual abuse and exploitation? Monika Bickert: We welcome the proposal and the overall approach of focusing on systems and processes. We are also—and this is important—pleased to see that in the response to the consultation the Government have said that they expect that the duty of care will apply differently to private messaging services versus public services. That is appropriate, given the different expectations that people have about privacy in a messaging service.

On the details of how we will comply, we are in regular communication with the Government and with Ofcom. We have not seen the draft Bill, but I am sure that consultation will continue. We look forward to a way forward.

Q143 Julian Knight: Niamh, have you seen the figures for growth in Signal and Telegram use between 6 January and 10 January, Signal doing 7.5 million installs globally and Telegram use spiking globally by 25 million? Why is that? Niamh Sweeney: I have seen this. It may have something to do with Elon Musk’s tweets about Signal. In relation to our prior conversation, some of those organisations are among the ones that make a virtue of not co-operating with law enforcement. I would draw that distinction. But I expect it is not only to do with Elon Musk’s tweet but also to do with the confusion that has arisen around the update to WhatsApp’s terms of service and the update to its privacy policy.

Q144 Julian Knight: I am glad you referenced that about Elon Musk, because all the commentators I have spoken to have attributed this growth to concern over the changes in the privacy policy. Why have you done this? Why do UK users have to comply if their data will be unaffected? Niamh Sweeney: We have done it for two reasons. The first is to add clarification and greater transparency to the terms and the privacy policy. Secondly, it is because we are introducing new business features. That required us to make some changes to both. I would also add that there are no changes arising from this update to our data sharing practices with Facebook anywhere in the world, specifically in the UK.

Q145 Julian Knight: If it does not gain you any benefit, why is this a price worth paying? Niamh Sweeney: It does give us benefit in that it adds transparency that is required and it unlocks some of these new business messaging features. Perhaps I can explain some of the different ways that people use WhatsApp. There is the consumer app that likely you and I use. There is the small business app, which is also available free for download from the app store. But then there is the WhatsApp API, and that is for bigger businesses. If British Airways wants to communicate with customers at scale, perhaps sending them a boarding card, they would use our API, which involves using a third-party service provider like MessengerPeople, Twilio or SendApp. They are the third-party service providers that work with the API. Going forward, Facebook will also be able to provide that third-party service to big businesses who want to message individuals on WhatsApp and avail of that sort of one-to-one direct communication.

No business will ever be given your phone number on WhatsApp. You have to reach out to them in the first place, or they have to have some direct line of contact with you to avail of that. There is nothing being foisted on users. It is just a new way for businesses to be able to use the service.

Q146 Julian Knight: It is a commercial decision. Basically you are going to make money from this? Niamh Sweeney: Yes, the API is one of the ways that WhatsApp makes money.

Q147 Julian Knight: I understand it is one of the ways, but this decision to change your privacy terms is about making money, which must mean surely that you are doing something with the data that you were not doing before. What are you doing with that data to monetise it, and how different is this going to be for users? Niamh Sweeney: We had two blog posts in October that set out the vision for WhatsApp and its revenue streams going forward. Using business messaging is a key part of that. There is a benefit to WhatsApp through greater use of the WhatsApp API by businesses like British Airways.

It does not involve us accessing any additional data; it is about how businesses use the API. They have greater access to your data if you engage with them. We make it very clear in a message anchored at the beginning of every chat with a business, which says, “You are messaging a business who are using X service provider to help us do that,” but there is no additional access for us, and there is certainly no—

Q148 Julian Knight: Okay, but you make money basically from selling this greater access to people’s data. You are going to make more money from that because you are selling greater access to information.

Niamh Sweeney: No, genuinely that is not an accurate characterisation of it. We sell the service that allows the businesses to interact with customers at scale, otherwise they would be tied to one phone and one account and messaging one person at a time. This allows them to scale their communication. They will have greater access to business and customer information if the customer chooses to engage with them. But there is no access to this data by Facebook. They will only be acting in some cases where the business that is using the API chooses to do so. They are only acting as a service provider for that business, that is, as a processor in the traditional controller-to-processor relationship in data protection language.

Yes, the WhatsApp API is a big part of the business play for WhatsApp but there is no difference to the way we access, and certainly no difference to the way Facebook will access, any user data arising from it.

Q149 Julian Knight: What will happen to UK users’ WhatsApp data when they travel to the United States? Niamh Sweeney: When we make the transfer of UK users to WhatsApp LLC, there will be no change in data sharing with Facebook, if that is your primary concern. This is all outlined in a document that was posted on the website.

Q150 Julian Knight: What will the changes be on data? Niamh Sweeney: The key one is that we will be able to comply with requests under the CLOUD Act, which we would not be in a position to do had we not moved or transferred users. This has not happened yet. Currently the service UK users receive is provided by WhatsApp Ireland, but we have announced that we are making the transfer. Once that service is provided by WhatsApp LLC, we will be in a position to respond to requests under the CLOUD Act from UK authorities.

Q151 Julian Knight: If it is so harmless, why do you think so many people have deserted your service in recent weeks over this issue? What is it that they fear? Niamh Sweeney: There has been a lot of confusion and a lot of misreporting about it unfortunately. There have been inaccurate reports about greater data sharing with Facebook, which simply is not the case. That is why we have made the decision to extend the user acceptance period from 8 February to 15 May, to give people more time to understand those changes and to help us through exchanges like this, making clear—

Q152 Julian Knight: Basically another three months’ grace rather than just do it straightaway; if you do not accept it then you are off. You are going to give them three months’ grace? Niamh Sweeney: That is how it works for most services. For us to enter into a contract with the user we must agree terms of service with them and they must accept those terms. That is true across all services that are provided on the internet. Yes, it is the case that you must accept the terms if you want to continue using the service. But we want to give people more time because our conversation demonstrates how there has been some misunderstanding around it, and that is through misreporting and potentially the handling on our part.

Q153 Julian Knight: Obviously I am desperately sorry about any sort of misunderstanding that has come about. However, it is about trying to find and get to a lot of the truth. Will you absolutely state for this Committee that there will be no greater sharing of data between WhatsApp, Facebook and Instagram as a result of these changes? Niamh Sweeney: I can absolutely say there is no change to the data that WhatsApp shares with Facebook. Facebook and Instagram are the same data controller, so that covers your question arising from this change. No change at all.

Q154 Julian Knight: People will start getting adverts and approaches from businesses on their WhatsApp from May. Is that what is happening? Niamh Sweeney: No, they will never get an approach from a business directly. There is no way for a business to get your number independently. It would be that they choose to message the business on WhatsApp and this will be facilitated. There is absolutely no distribution—

Q155 Julian Knight: Let’s say you choose to message a particular company on WhatsApp. That data is collected and then that company can respond to you at any time. Is that right? Niamh Sweeney: That is correct. The basis is where you are a customer of theirs and—

Q156 Julian Knight: Effectively what you are saying is that anyone who sends a WhatsApp to any company at any time in the future is giving carte blanche for that company to contact them at any moment? Niamh Sweeney: No. The company has obligations, the same rules around direct marketing. It must obtain the user’s consent, but an individual reaching out to a company via private messaging in this manner would indicate that they are—

Q157 Julian Knight: Can you assure the Committee that the system—that interaction between the company and the individual—will be as robust as current systems outside of encrypted messaging services, for example websites? Niamh Sweeney: I am not sure I understand the comparison.

Julian Knight: GDPR is what I am talking about. When you contact a company via WhatsApp, effectively you then say, “Okay, that means you can now contact me.” Will that interface be as robust as it is at present through GDPR for websites on the internet, as opposed to encryption services?

Niamh Sweeney: Yes, every business that is interacting with you is covered by GDPR. I should add that at any moment an individual WhatsApp user can block another WhatsApp user, and that includes a business. You are not obliged to continue your interaction with them. But yes, the business in this instance will become a data controller and would have responsibilities and obligations for that customer and would have to adhere, but the relationship at that moment is with the business.

Q158 Julian Knight: Thanks for that clarity. It is good that I will be able to block more businesses. That is fantastic. Monika, your company has emphasised to my Committee the role of the Correct the Record tool in tackling misinformation. Do you think this sets a kitemark for what is possible when it comes to tackling harm? Do you have plans to develop similar tools for other types of harmful content, such as promoting self-harm or eating disorders? Monika Bickert: We have certainly learned a lot from the developments that we have made so far. I think we are still learning and we will continue to explore how we can use Correct the Record tools in the future. As I think you are aware, one of the things that we have learned is that there are ways that we can reach out to people who have interacted with content that has now been labelled false, or who are trying to share it, that can take them to authoritative content and fundamentally change their experience and their understanding of a topic that is in the public discourse. Yes, we will continue to explore what we can do.

Q159 Julian Knight: Are you going to do it though? Are you going to roll this out for other types of online harm content? You can probably accept—I think most observers would accept—that it has been a positive development. Monika Bickert: Specifically around, say, Covid-19 information, it has been a positive development. I want to caution against extrapolating it to all different areas, because each of these areas is different. As you may be aware, when we first began our misinformation programme, we experimented with different labels and we tried things like “disputed” and then we moved to “this is false.” Over time you have seen these labels evolve and change based on the feedback that we are getting from our systems and from experts about what is working. I would expect that it might vary for different types of misinformation.

Q160 Julian Knight: Why not just put down, “This is potentially dangerous,” for example, on your platform for a discussion about self-harm or the benefits of different types of eating disorder and that sort of content? Would it not be better just to flag that this is potentially harmful? Monika Bickert: That is such a sensitive area, and it is one where we have devoted significant resources to understanding the best ways to keep people safe. It is not necessarily clear-cut. For instance, we have long had policies saying that we will remove anything that promotes self-harm, but we have also talked to enough safety experts over the years that we understand that if somebody is saying that they are engaging in self-harm, they are admitting it, it could contribute to the stigma they feel and the harm they feel if we simply remove the content. I do not know, but I could imagine if—

Q161 Julian Knight: I am not saying you need to remove the content as such, I was talking about flagging the content. I do take your point very much, that a discussion group on Facebook about self-harm could also be helpful to those who are reading it if they are reading the experience of people who have been through self-harm. You have to be careful when you are doing anything that you do not effectively damage what is quite intense discourse by coming down too hard and taking off content. I do understand that point, but there is a middle ground here, is there not, about the idea of just monitoring and effectively saying, “Watch out. This is content that may be dangerous”? Monika Bickert: There is a middle ground in providing resources, and I want to point out that we do a lot of that now. For instance, if we become aware of potential self-harm, of course we will inform authorities when we see an immediate safety risk, but we also provide resources directly to people who we think are at risk of self-harm. But there is more that we think we can explore there. I cannot say specifically what we will end up doing in the future here, but I can tell you that whatever we do will be informed by very close work with the safety experts who are at the forefront of understanding what is in the best safety interests.

Q162 Stuart C McDonald: Ms Bickert, a couple of questions. First of all, AI has obviously been of increased importance for content moderation, and increasingly so for hateful content, but what are the limits of using AI in moderating hate content? Going forward, where is the balance going to be struck between human moderators and AI moderation? On the same point, are we years away, as some have suggested, from having algorithms that would be sophisticated enough to deal with what we saw in relation to the horrendous events at Christchurch with the relentless reposting of videos? Monika Bickert: First, if history is any guide, when we say what we think AI is capable of doing, we end up being proven wrong at a later date when we see the technology develop. For instance, I would have told you a couple of years ago that I did not think that AI could be useful in helping us remove hate speech because it is simply too contextual. I was clearly wrong. Now the vast majority of the content that we remove for violating our hate speech policies is identified from our technical tools, but there are significant limitations and one is understanding the context. Technology flags most of the hate speech that we remove, and it flags around 95% for us before people report it to us. However, a lot of that still has to be reviewed by people on our teams because the machine cannot necessarily tell if a slur is being used to attack somebody or if somebody is criticising the use of the slur. There are limitations there.
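A minimal sketch of the flag-then-review pipeline described here, with an invented keyword heuristic standing in for a trained classifier; the threshold and routing rules are illustrative assumptions, not Facebook’s actual system.

```python
def classify_hate_speech(text: str) -> float:
    """Stand-in for a trained model returning the probability a post violates policy."""
    # A real system would be a learned classifier; this heuristic is illustrative only.
    slurs = {"exampleslur"}
    return 0.9 if any(s in text.lower() for s in slurs) else 0.05

def route(text: str) -> str:
    score = classify_hate_speech(text)
    if score < 0.3:
        return "leave up"
    # The model cannot reliably tell an attack from condemnation or quotation,
    # so flagged content above the threshold goes to a human reviewer.
    return "queue for human review"

for post in ["Nice day out", "Anyone who says exampleslur should be ashamed"]:
    print(post, "->", route(post))
```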

With regard to live footage, live video, I want to specifically address the Christchurch attack, and of course what happened was a horrific event. The role that social media and our services played in people trying to upload that content, especially re-upload that content, is one that we have taken very seriously. At the time, in the 24 hours following the attack, we were able to identify the versions that we saw being uploaded—there were hundreds of different versions—and share those with other technology companies, and together we were able to remove that content much more effectively. I think we removed more than a million attempted uploads in the first 24 hours after the attack.

But one of the weaknesses of AI in that situation, which we saw, was the ability of AI to recognise the attack as it was happening. The weakness comes from a lack of the training data that you necessarily need to build machine learning systems. There are ways we can overcome that. For instance, since the Christchurch attack, we partnered with police organisations, including UK police organisations, to obtain footage that they might have of what I can only call first-person violent videos, first-person shooter videos, so that we can build that into our systems so that the systems can get ahead in recognising that. If you extrapolate that across all the different areas, all the different types of violence, this is a process that will take a long time. As we see language and speech and offline trends in abuse, as that all continues to evolve over time, the technology is not going to be perfect. It is always going to have these limitations.

Sometimes people ask me, “Are you going to move to a model where everything that you do is done by machines and you will not need people anymore to remove content?” No. I think for the foreseeable future we are very much going to need this model where the technology is helpful at flagging things, but people are necessary to review a lot of it.

Q163 Stuart C McDonald: There is a sense in which you will always be playing catch-up because things evolve, but specifically on the point about how AI can flag content and then it has to be reviewed, do you have figures about what percentage of the flagged posts end up being restored or on what percentage the human moderation agrees with what the AI formulas have decided? Monika Bickert: We put out some numbers around decisions that we have overturned based on appeal, but those are more about when we have made a final decision. A decision could have been made by automation, but it also could have been made by a person, and then on a second look we reinstate content. We do not have figures on when a technical system has suggested that something violates and then our reviewers have said, “No, it does not,” but one reason that number would be difficult is that we get value from the process. As we train these technical systems, there is value to us in having what the machine catches be a little broad, because it allows us to label that content and the machine learns not only what violates but also what does not violate. That process, that labelling function, is something that is very important for the real people who are reviewing content right now. It is a primary part of their job to make our systems better.
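A short sketch of the labelling loop described above, under the assumption that every human decision on machine-flagged content is stored as a training label; the function and variable names are hypothetical.

```python
training_examples: list[tuple[str, bool]] = []   # (text, violates_policy)
false_positives = 0

def record_review(text: str, reviewer_says_violates: bool) -> None:
    """Every human decision on machine-flagged content becomes a training label."""
    global false_positives
    training_examples.append((text, reviewer_says_violates))
    if not reviewer_says_violates:
        false_positives += 1   # "flagged but fine" examples teach the model what not to catch

record_review("post quoting a slur to condemn it", reviewer_says_violates=False)
record_review("post using a slur to attack someone", reviewer_says_violates=True)
print(len(training_examples), "labels,", false_positives, "false positive(s) available for retraining")
```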

Q164 Stuart C McDonald: I raised with the other panel an issue about these algorithms. You spoke a lot about how you do not think Facebook is an editor, but there is a sense in which, like other platforms, it is a promoter. It suggests accounts to people. We have repeatedly raised concerns that, on some platforms in particular, some of the content that can be promoted to users can be hateful or harmful or breach various other rules. Would you be happy to see the regulator taking a very hands-on approach to this issue, being able to see the algorithms that Facebook uses and the details around the results of those algorithms so that the regulator could then decide what action might be necessary to try to stop this happening? Monika Bickert: Let me say first that I think we would request guidance from the regulator on how we should think about these systems and algorithmic controls. We look forward to having those conversations. As I think you know, we have a very good relationship with Ofcom and think it can be the right regulator here. We want to learn from it about what makes sense.

A lot of auditing algorithms or showing algorithms is very system-specific. In other words, it would not even make sense to me and I have worked at the company for nine years. This is code that engineers are trained to understand in the context of our broader systems. Some of that information would not be directly useful and some of it, a lot of it, of course is also proprietary. What I think we can do is make sure that we have the systems in place—and I appreciate that the proposed UK model is very much systems-based—to ensure that people understand what a social media platform’s rules are for when they down-rank things or when they recommend things, and making sure there is a level of transparency and accountability there.

Q165 Stuart C McDonald: A final question for you both. If somebody is banned from using a platform, how easy or difficult is it to prevent that person from signing up again? What can be done to try to improve enforcement of such bans? Monika Bickert: This is a real area of focus for our enforcement teams. The last thing you want is to be wasting resources going after the same accounts again and again, and giving somebody the chance to abuse our services and engage in violation of our policies. It has been an area of investment over the years. Personally, I put out a blog post or two about how we are getting better at identifying recidivist accounts specifically in the areas of dangerous organisations—these are hate organisations, terror organisations—and co-ordinated networks of inauthentic behaviour. These are your financial networks that are trying to post election-related content to draw people to an ad farm, or they could be people trying to interfere for a political reason. Those are the types of accounts where we have invested effort in identifying their recidivist behaviours, because they are so sophisticated.

There is a broader recidivism problem, which is more focused on basic attempts to use, say, a bot to create accounts that might spam. There we take a different approach. We do not have to have as deep a knowledge of the behaviour that the actors are using. It is more about understanding the behaviour involved in creating the accounts. We have technical systems there that can root out that type of recidivism, and we now stop more than 1 million accounts a day at or near the time of creation and remove them from our services.
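A toy sketch of blocking suspect accounts at or near the time of creation, as described above. The sign-up signals and thresholds are invented for illustration and do not reflect Facebook’s actual detection systems.

```python
from dataclasses import dataclass

@dataclass
class SignupAttempt:
    signups_from_ip_last_hour: int
    email_domain_is_disposable: bool
    completed_in_seconds: float        # humans rarely finish a sign-up form in under a couple of seconds
    device_previously_banned: bool

def block_at_creation(s: SignupAttempt) -> bool:
    """Toy rules for catching bulk or recidivist sign-ups before any content is posted."""
    if s.device_previously_banned:
        return True
    if s.signups_from_ip_last_hour > 20:
        return True
    if s.email_domain_is_disposable and s.completed_in_seconds < 2.0:
        return True
    return False

print(block_at_creation(SignupAttempt(35, True, 0.8, False)))   # True: blocked at sign-up
```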

Niamh Sweeney: It is obviously a little bit different for WhatsApp, because to use the service you need to have a phone number, so once the number is banned it cannot reregister. We invest significant resources in trying to stop recidivist accounts that have been banned from returning. There are some challenges in this part of the world because one of the most effective ways we might try to do that is by comparing contact lists, so you might return with a different number, but if you have the same contacts we would be pretty confident. For good solid reasons there are limitations around data retention, which make absolute sense under GDPR, so we can do that in some parts of the world, but we cannot do it here. There is a slightly different barrier to entry because you need a phone number and we do ban that number from reregistering.
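A minimal sketch of the contact-list comparison Niamh Sweeney describes, assuming hypothetical numbers from the reserved fictional UK range; the overlap measure and threshold are illustrative, and, as she notes, the check depends on lawfully retaining the banned account's contacts, which GDPR retention limits may not allow.

```python
def contact_overlap(banned_contacts: set[str], new_account_contacts: set[str]) -> float:
    """Jaccard similarity between a banned account's contact list and a new account's."""
    if not banned_contacts or not new_account_contacts:
        return 0.0
    shared = banned_contacts & new_account_contacts
    return len(shared) / len(banned_contacts | new_account_contacts)

banned = {"+447700900001", "+447700900002", "+447700900003"}
candidate = {"+447700900001", "+447700900002", "+447700900003", "+447700900099"}

# High overlap suggests the same person returning on a new number. The comparison
# is only possible where the banned account's contact list may lawfully be retained;
# where it must be deleted with the account, this signal is unavailable.
if contact_overlap(banned, candidate) > 0.7:
    print("likely recidivist account")
```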

Stuart C McDonald: You say you cannot do that here? Niamh Sweeney: No, we cannot under GDPR because once somebody’s account is deleted, that means we should not be maintaining or retaining any information about their account, so we cannot retain a contacts list for a banned account. That would be one useful way of doing it, but there are good reasons for that and I am not criticising it.

Q166 Stuart C McDonald: It must be pretty impossible to be able to stop banned users coming back on if all they need to do is get another phone and they are back in business again. Niamh Sweeney: There are challenges. Frankly, I am not best placed to go into the details of it. There are ways we can identify through IP addresses and different data points like that, but yes, you are right, it is a challenge. I suppose acquiring a new telephone number is a certain barrier, but you are right. Our engineering teams dedicate quite a lot of time to dealing with it, but it is not something we would ever say we have a 100% record on or anything close to it.

Q167 Stuart C McDonald: Do you have any idea of the numbers of accounts that you have stopped because you believe they are banned users who are starting up again? Niamh Sweeney: I cannot speak to that number exactly, but I do know that we ban 2 million accounts every month for engaging in behaviour that is prohibited. A lot of that, however, would be around bulk and automated use of the service. The service is designed for contact with your close personal contacts. For the majority of conversations that happen on WhatsApp, the average group size we see is fewer than 10 people. However, when people try to abuse it we have automated detection that will kick in. Often that happens at the point where the account is set up, before they ever manage to send a message, because there are signals—and often this will be outside of the European Union—that help us to identify that this is either a recidivist account or an account that is set up by a bot or there is not a real person at the other end of it. There are 2 million accounts banned every month for engaging in prohibited behaviour, and about three quarters of those are banned automatically without any sort of human intervention.

Stuart C McDonald: If there is anything more that you can send us about what is done to try to stop banned users from getting back on, that would be useful. Thank you. Niamh Sweeney: Absolutely.

Q168 Chair: Yes, it would be really helpful to have some more information on that, thank you. I asked the previous panel if it would be possible to have information on the number of people you currently have working on content moderation, and also the number of people within that who are working directly for your organisations and the number of people who are working for outside agencies or for contracted agencies. That would be really helpful. Finally, is there any other point you would like to make on the forthcoming online harms legislation? We obviously had the White Paper, the response, but we have not yet had the legislation. Is there any particular aspect of the UK’s online harms legislation that you would highlight as being a key one to get right, or where you have a different take and a different view from the Government’s current position? Monika Bickert: I would first say that I am very pleased. I think it is the right direction to focus on the systems and processes. I put out a White Paper last year that is just a view, our view, on what some of the different models could look like. From examining what we have seen around the world, I think the first step in regulation has to be ensuring that companies have the foundation that will allow them to be held accountable. Some of that is making sure there is transparency and the appropriate systems and processes that are checked by a regulator. That is very much the right direction.

We have regular dialogue with Ofcom and I know this is something that we will be talking about a lot, but when I think about what the real challenges are that I would expect to face if I were in that position, I think it is very hard to write rules that give enough guidance to be operable, but also give enough flexibility to take into account the sometimes dramatic shifts that we see in the speech landscape. I have been doing the content policy job at Facebook for about eight years and, when I think about where we were—not just Facebook but where we were as a society—in thinking about what was acceptable speech online and where we are now, it is very different.

When I look at some of the trends, a year and a half ago nobody would have expected what we have seen with Covid-19, for instance, and we have also seen this dramatic shift in the way that we think about terrorism and extremist content with the rise of the far right. It is going to be very important for the regulator to strike that balance between actual guidance and flexibility.

Niamh Sweeney: I echo some of Monika’s earlier comments, but I will be looking at it more specifically through the lens of private communications. I very much welcome that the Government, in the White Paper and in the comprehensive response that was published late last year, have highlighted that a differentiated approach is appropriate and have highlighted the importance of privacy. There are only two paragraphs in the Government’s response that speak to private communications, but obviously a lot of this will play out in the codes of practice that Ofcom has yet to develop. Broadly speaking, I think it is positive to see that proportionality, privacy and a differentiated approach have been mentioned. I think that is all positive.

Chair: Thank you very much. We are hugely grateful for your time this afternoon. I realise that this has been a long session at a time when there have been some other things happening that we will all go and have a chance to catch up on now. Thank you very much for your time and for the evidence session this afternoon.