Home Affairs Committee
Oral evidence: Online harms, HC 342

Wednesday 20 January 2021

Ordered by the House of Commons to be published on 20 January 2021.

Watch the meeting

Members present: Yvette Cooper (Chair); Ms Diane Abbott; Dehenna Davison; Ruth Edwards; Laura Farris; Simon Fell; Tim Loughton; Stuart C McDonald.

Digital, Culture, Media and Sport Committee member present: Julian Knight.

Questions 1-168

Witnesses

I: Theo Bertram, Director of Government Relations and Public Policy for Europe, TikTok; Nick Pickles, Global Head of Public Policy Strategy and Development, Twitter; Derek Slater, Global Director of Information Policy, Google; and Henry Turnbull, Head of Public Policy UK & Nordics, Snap Inc.

II: Monika Bickert, Vice President, Global Policy Management, Facebook; and Niamh Sweeney, Director of Public Policy EMEA, WhatsApp.

Examination of Witnesses

Witnesses: Theo Bertram, Nick Pickles, Derek Slater and Henry Turnbull.

Q1 Chair: Welcome to this Home Affairs Select Committee evidence session on online harms. We have with us today Theo Bertram from TikTok, Nick Pickles from Twitter, Derek Slater from Google and covering YouTube, and Henry Turnbull from Snap Inc, covering Snapchat. Thank you very much to our witnesses for joining us today. We are very grateful for your time.

The inauguration of President Biden and Vice President Harris is now under way, and there are 25,000 national guard guarding the Capitol and the ceremony. We are very clear that there could be more violence today, and we all have in our minds those terrible scenes of a violent mob storming the Capitol and assaulting an open democracy. Four years ago when this Committee first took evidence on online harms, we raised the issues around people escalating hatred or organising violence on social media, but we would not have imagined that we would ever see the scenes we saw this month. Given some of the anger and extremism that we have seen, how far do you feel that each of your platforms has made this possible?

Derek Slater: Thank you, Chair, both for the question and for the opportunity to come before you today to continue this conversation about online harms. Certainly what happened this month was a terrible event, and we take very seriously our responsibility both with respect to that situation and, more broadly, with respect to providing high-quality, relevant information and addressing user safety, online harms and low-quality information. Over the years we have deployed multiple different levers to address that challenge, raising up authoritative, quality information, rewarding creators who create that sort of information, while removing illegal content or content that violates our content policies; in other words, reducing or not recommending content that is borderline.

Q2 Chair: Sure, I know that you have policies. I am just asking you to reflect. How far do you think that your platform has enabled the kind of extremism that we have seen? Not what are your processes but, realistically, how far do you think your platforms have enabled this?

Derek Slater: I approach that with pride and humility in the sense that I think we have continued to improve over time, but there is always more to do. There are always new challenges, new bad actors, new threat vectors. I look at the improvements we made a couple of years ago on our hate speech policy and the way that dealt with—

Q3 Chair: Sure, and those are the things that you have done. I am asking you to have a bit of humility and tell us how much you think some of the groups that organised the assault on the Capitol were communicating, were using YouTube to radicalise each other, to publish their videos and so on. How far do you think that was happening?

Derek Slater: We took action against groups that were violating our policies. Certainly, we are looking closely at what we need to do in the future to make sure we continue to pursue it. To give one example, late last year we improved our policies around harmful conspiracy theories connected to real-world harm and aimed at individuals or key groups, such as QAnon—

Q4 Chair: Sure, but you are still telling me the measures that you are taking. I am interested, first, in how bad the problem has been. How much of this has been on your channels?

Derek Slater: We have continued to make progress in removing and reducing this sort of harmful behaviour—

Q5 Chair: You are still not answering my question. I am not asking you about the measures that you have taken. I am asking you about how much of this has been happening on your channel.

Derek Slater: Chair, I am trying to answer the question. I understand. When we saw content that was violative, we did take action. I think we always have to continue to evaluate, and we are still taking stock of what happened and what we can do better, so I am happy to follow up and to continue as we evaluate the situation at hand.

Q6 Chair: You basically think you have done everything you could and you have taken it all down?

Derek Slater: No. I think we have to continue to improve over time. We get up every day thinking about, “How can we do better?” We are not resting on our laurels in any way. We will continue to evaluate how we did, how we are doing and what we need to do going forward.

Q7 Chair: Have you removed white supremacist material from YouTube?

Derek Slater: Yes, we remove material that expresses superiority based on race, religion or other characteristics of that sort. We do remove it from YouTube.

Q8 Chair: How come I could find it just 10 minutes ago? How come I can find videos from Red Ice TV? I think you have banned the channel, but I could still find their material and their videos promoting white supremacist theories. They were being posted by other people just 10 minutes ago.

Derek Slater: I am not familiar with the exact example you are talking about, but we use a combination of automated mechanisms to identify violative content. Those systems are getting better and better over time at identifying that content and ensuring it is removed before it is viewed at all, or certainly before it is widely viewed. I think last quarter 80% of the violative videos were removed before 10 views. We also rely on and are grateful for co-operation from users and trusted flaggers like the CTIRU, and for references from folks like yourself, to make sure that we are reactively responding where automated systems are not able to detect those things. I would be happy if you sent along that information. We will be able to take a further look at it.

Q9 Chair: Do you think that none of the people who were storming the Capitol would have been radicalised on YouTube?

Derek Slater: You know what, I think we have to look very carefully at this situation and make sure that we are continuing to address that challenge. The challenge of making sure we are not only removing violative content, but reducing and not recommending content that may not cross that line but borders it, is something that we need to continue to invest heavily in.

Q10 Chair: Let me turn to Twitter. Mr Pickles, welcome. Can you reflect for me how much you think this kind of problem has been enabled by Twitter or the fact of Twitter’s existence over the last few years?

Nick Pickles: Thank you, Chair, for the opportunity to appear and discuss this issue. Certainly, my colleagues and I were shocked watching the events unfold in the Capitol, and I think it is impossible for anyone who works in the technology sector to look at those events and not ask, “Did we play a part in this?” I think the answer has to be yes. The link between offline harm and online content is demonstrably real. Four years ago, you may have heard a different answer to that question.

Looking at it, did we move fast enough on some of the conspiracy theories? One of the challenges is that there has not been an obvious tipping point where you would say, “This was the point where we should have taken action.” Last year we looked at QAnon, for example, and de-amplified it, made it harder to find and did not recommend it, but we allowed the speech to continue. This year we changed our approach and aggressively removed 70,000 accounts related to that. Now, if we were to reflect on our actions, should we have taken more aggressive enforcement action earlier? I think we have to say yes.

The challenge right now is to look at our services and say, are the policies that we have now the ones we had in 2016? They are not. We have strengthened them significantly. Have we enforced them rigorously and consistently enough? Again, I think we have to say we have more work to do to enforce our policies consistently. Ultimately, are we willing to take the hard decisions when needed? Looking at our actions around the Capitol riots, ultimately suspending the personal account of the President of the United States was an unprecedented moment, but it was also a reflection of our service’s role in offline events. We saw how those tweets were being interpreted, and we took the decision to remove that account. We definitely have more to do and more to learn, but I also think we have made progress since 2016.
