American Enterprise Institute

Web event — What lies ahead for Section 230?

Opening remarks: Mark Jamison, Visiting Scholar, AEI

12:35 PM Panel discussion

Panelists: Daniel Lyons, Visiting Fellow, AEI Matt Perault, Director, Center on Science & Technology Policy, Duke University Kate Tummarello, Executive Director, Engine

Moderator: Mark Jamison, Visiting Scholar, AEI

Wednesday, May 26, 2021 12:30–1:30 p.m.

Event page: https://www.aei.org/events/what-lies-ahead-for-section-230/

Mark Jamison: Welcome, everyone, to our American Enterprise Institute panel on “What lies ahead for Section 230?” I’m Mark Jamison, a visiting scholar at AEI. I’m also director of the Digital Markets Initiative at the University of Florida and the university’s Public Utility Research Center. Section 230 is the part of the Communications Decency Act that provides — well, what some say are very broad protections for interactive computer services in their work of hosting content that other people produce and moderating that content. We’re talking about the well-known platforms, but also a host of large — excuse me — a large host of smaller tech platforms, as well as some companies that we don’t think of as tech companies, where they host content that other people produce.

I’m joined today by three distinguished panelists. Daniel Lyons is a visiting fellow here at AEI, where he focuses on the legal aspects of FCC policy, the Supreme Court, online speech, and antitrust. He’s written extensively about Section 230 over the past year, particularly about calls to reform Section 230 from those who believe that political speech is being unfairly censored online. Daniel is a professor at Boston College Law School, where he teaches telecommunications, administrative, and cyber law — among other topics. He has a JD from Harvard Law School and an AB from Harvard College.

Matt Perault directs the Center on Science & Technology Policy at Duke University, and is an associate — excuse me — an associate professor of the practice at Duke’s Sanford School of Public Policy. Matt previously served as director of public policy at Facebook, where he led the company’s global public policy planning efforts on issues such as competition, law enforcement, and human rights. He also oversaw public policy for WhatsApp, Oculus, and Facebook’s artificial intelligence research. Matt holds a law degree from Harvard Law School, a master’s degree in public policy from Duke, and a bachelor’s in public — excuse me — political science from Brown.

Kate Tummarello is the executive director of Engine — a policy, advocacy, and research organization that helps startups become meaningful contributors to the economy. Before serving as executive director, Kate was a policy director and analyst for Engine. She also worked as a surveillance reform policy analyst for the Electronic Frontier Foundation and as a tech reporter for several different outlets. Kate holds a bachelor’s degree in public policy from Hamilton College. Daniel, Matt, and Kate, welcome.

Daniel, let’s begin with you. Let’s go back to 1995. People have been suing interactive computer services — we called them, largely, computer bulletin boards back then; now we call them platforms — and the courts were tending to decide that these platforms could not be held liable for what people posted on them as long as the platform did not take any hand in trying to moderate that content. The exception to that liability protection was if the platform took some moderating role. So the platform was free of liability if it allowed its service to just be kind of a lawless wild, wild west, but could be held liable if it tried to impose any kind of quality control.

So then enter then-US Rep. Chris Cox. He believed the situation was untenable. He thought that the way things were going was stifling innovation, stifling some real great business opportunities, and he found an ally — then-US Rep. Ron Wyden, who is now Sen. Wyden — and they wrote together what we know as Section 230. And it was passed into law in 1996. So just coming up to the present now, tell us: What is this Section 230, and what does it do?

Daniel Lyons: Yeah, that’s exactly right. Section 230 provides the legal framework under which much of America’s internet ecosystem currently operates. So it’s comprised of two parts — one of which, I think, is more important than the other. The first is 230(c)(1), which says that no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider. So you might call this “hosting immunity.” And in layman’s terms, this means that companies that host user-generated content are not liable for the material that their users post. Professor Jeff Kosseff called these “The Twenty-Six Words That Created the Internet.” And I think he’s quite right about that. Twitter lets me post precisely because it doesn’t have to worry about being sued if I say something defamatory — which, in Twitter’s case, is really valuable given some of the content there. This hosting immunity is especially powerful given the broad interpretation that courts have given to the scope of this provision.

The second piece is Section 230(c)(2), which says that no provider or user of an interactive computer service shall be held liable on account of any action taken voluntarily in good faith to restrict access to material that the provider considers to be obscene or lewd or lascivious or otherwise objectionable. And you might call this “takedown immunity.” So this would allow Twitter, for example, to remove objectionable content if it chooses to do so without facing legal action by its users. And that encourages companies to “police their own yards,” so to speak, by giving them a free hand to decide what they will and what they won’t allow on their sites.

So we often think of Section 230 as benefiting — particularly the big giants like Facebook and Twitter. But it’s important to note that it covers any interactive computer service, which essentially means any company that has multiple users online. So yes, it protects Facebook, but it also protects the local newspaper’s ability to host a comment section on its website, and a no-frills online market connecting buyers and sellers of used junk, right? It sweeps really broadly.

And then one final point: There are a few exceptions to Section 230. It doesn’t apply, for example, to material that violates federal criminal law — child sexual abuse material, for instance — which companies have a duty to identify and remove. It was amended more recently to exclude certain claims related to sex trafficking as well. And Section 230 doesn’t apply to intellectual property claims; those are governed by a different section of the law.

Mark Jamison: All right, well, thank you very much.

Now, Kate, you’ve spent a lot of time covering the tech sector. You’ve also spent a lot of time, now, talking with these companies that are providing these kinds of services. How important has Section 230 been to them and the development of their businesses?

Kate Tummarello: Yeah, Section 230 is critical — not just to launch a company that relies on user-generated content, but even to attract investment to be able to launch the company. No investor wants to put a couple million dollars into a company if they know those millions could be funneled right to legal fees from lawsuits arising from user-generated content. And I think that’s a common misconception — and we’ll get to the First Amendment stuff later on — this idea that, you know, well, this stuff is protected legally, or, you know, maybe the lawsuit is ridiculous and couldn’t win, even if somebody wanted to fight it out in court. But for small companies especially, even the threat of litigation is really expensive and can be ruinous. We’ve done some research on the cost of litigation with and without Section 230 in place.

So under current law, it can cost tens of thousands of dollars just to get a lawsuit over user-generated content dismissed. If you remove Section 230, or if your case proceeds despite Section 230, you’re looking at hundreds of thousands of dollars. The average startup, you know, does not have hundreds of thousands of dollars — especially not to spend on lawyers’ fees. They’re operating on, you know, shoestring budgets, and they’re trying to make the most innovative and exciting stuff, and they don’t tend to have huge staffs or huge bank accounts.

So Section 230 is, you know, what keeps startups able to host user-generated content and to prioritize the needs of their users, right? The whole internet is not Facebook. Not everything on the internet is a Facebook post or a tweet or a YouTube video. Daniel was talking about comment sections and e-commerce sites, but there’s also things like messaging services, photo hosting services. Consumer reviews are really useful to the average consumer. And all that stuff is protected by Section 230.

And we have this really rich ecosystem of companies offering different ways to communicate with other people. And that all depends on Section 230. And yeah, while Facebook may be able to fend off lawsuits from user-generated content, almost no one else can. And I think that often gets lost in the weeds in these conversations.

Mark Jamison: So some of these companies that you’ve spent a lot of time talking with — you’ve heard them talk about the pressures from customers, the customers’ dissatisfaction or, perhaps, their happiness, and about being hauled before Congress, if you will, to explain what they’re doing and why they do things the way they do. Why is it that Section 230 is controversial? If it’s created this wonderful ecosystem, everyone should be happy.

Kate Tummarello: I wish. Yeah, it is very controversial. I think at the underlying level, speech is really controversial — especially in a country where we protect a lot of speech. And we protect a lot of speech that other people don’t like; it is legally protected, and it’s really hard to put in a box. You know, what one person thinks is acceptable speech or should be legally protected speech, another person thinks is incredibly offensive and probably should be illegal.

And even when we’re talking about really specific categories of speech like child sexual abuse material or, you know, sex trafficking material, even that stuff, you know, while the worst of it is clear — and of course, companies want to step in and do the right thing — you know, sometimes we hear stories about filtering technologies catching a stay-at-home mom blogger posting a picture of her toddler in a bathing suit, right? That’s not child sexual abuse material, but it can look like it to the untrained — especially the untrained computer eye. So speech is just really hard to get right, and it’s really hard to get consensus on. And I think you even see lawmakers disagree with each other, right? You see lawmakers in the same hearing say, “You should have taken this vaccine-related content down” or “why did you take this vaccine-related content down?” And you can sub almost anything in for “vaccine” in that sentence. So it’s really messy, and we protect a lot of speech in this country. And that means that we protect the ability to host that speech. So that’s definitely one part of it.

But the other thing that I think is really kind of an important piece of context is that tech is kind of an easy punching bag right now. People are unhappy with the Big Tech companies and that’s trickling down. And something Engine is always really conscious of is: If you make policies for some of the, like, bigger mistakes from the biggest players, you will end up making it so that no one else can operate in this space. And so we’re, you know, always concerned that the controversy around a handful of decisions from a very small number of companies will end up changing the ecosystem overall.

Mark Jamison: All right. So, Matt, Kate just described why it is that the politicians and a lot of other people that call their politicians or representatives get upset about this. What’s Congress been talking about? What are some of the ideas that are out there, and which ones seem to maybe get some traction?

Matt Perault: Thanks, Mark, and thanks so much for having me, for including me in this event. It’s really an honor to be with you guys today. I think Kate’s last point is exactly the right one. A lot of congressional reforms have been focused on trying to take a bite out of a small group of companies: Twitter, Facebook, YouTube, maybe to a lesser extent Apple and Amazon. But I think the costs of 230 reform are likely to be borne by smaller companies. And that’s because of a point that I think Daniel and Kate both made, which is: Section 230 is not just about immunity from liability; it’s also about immunity from suit. And so it enables platforms to kick cases out of court at the motion to dismiss phase.

As Kate said, that’s still an expensive proposition. Even that, even those litigation fees are expensive. But under Section 230 reform, even in cases where platforms were eventually going to prevail, they might have to incur enormous litigation expenses — particularly through the discovery process — costs so burdensome that they would likely have to change their products in significant ways, which I think would probably result in a much more constrained space for user expression.

Congress has talked about a couple of different reforms. We detailed these as part of a legislative tracking project that we published on Slate in partnership with American University and Future Tense. And we looked at all the reforms that have been introduced in the last Congress and the current one. And there are really, I think, four types of reform. There’s outright repeal — repealing Section 230 entirely — which, I should say, there has been bipartisan support for. President Trump and candidate Biden both supported full repeal of Section 230.

Carve-outs: So as Kate and Daniel described, for material like sex trafficking, for instance, or child sexual exploitation material, carving that out from Section 230 protections. Some others have proposed carving out terrorism or civil rights crimes, for instance (or civil rights violations, I should say). Another category of reform is quid pro quo reforms, where a platform actually has to earn the liability protections of Section 230. Those would be reforms like the PACT Act or the EARN IT Act, or also Facebook’s recent proposal where they proposed that platforms would have to earn Section 230 protections by instituting certain operational best practices.

And then the final category of reforms are the Good Samaritan reforms, which target Section 230(c)(2), which Daniel described, and are really focused on trying to ensure that platforms are more “neutral” in how they police content on their services.

Mark Jamison: All right, so, Daniel, you’ve written specifically about some of the people who are concerned about the suppression of political speech. And as I recall, you were thinking that maybe Section 230 isn’t the right way to deal with that, but maybe I’m wrong. Could you just explain what your thoughts are on that, please?

Daniel Lyons: Yeah, so, much of the backlash on the right is coalescing around this idea that maybe social media platforms owe some kind of common carrier–like obligation to their users — some sense of an equal carriage model. It’s an interesting thought experiment, and I think it gets to the concern by some of those on the right about the power that social media companies have to control the flow of information in society. Our general rule has said businesses are free to contract with or refuse service to anyone, subject to background norms of nondiscrimination. But the law has long recognized an exception for certain industries that we call “affected with a public interest.”

Now, that area of the law is notoriously vague and kind of circular. But basically, the touchstones are: one, your business has some unique public significance, right? Something like transportation, or lodging, or communications; and two, you hold yourself out to serve the public generally. So there are a number of commentators who are suggesting that maybe we should be putting social media platforms into this category. Most notable among these are probably Professor Richard Epstein, and Justice Clarence Thomas has floated this idea recently.

Now, there are a number of parallels, I think, between social media platforms and traditional common carriers, right? So for example, like railroads and telephone companies, social media companies carry a good: in this case, information among consumers. And they have significant market power — which is often a hallmark of a common carrier and can shape the way public opinion goes. Although there are some questions about, I think, how you define the market in that circumstance. And what they do — dissemination of information — is very important and, in fact, important to the public interest.

Now, that having been said, I personally think the common carriage model doesn’t quite fit for three reasons: One, common carriers generally hold themselves out to serve indiscriminately. And that’s not really what social media platforms do. They’re not, I don’t think, neutral conduits of information like phone companies. Instead, they sort content, they promote some and downplay others, they add commentary like fact-checks. They feel more like newspapers, where editorial judgment is part of the business model that they’re engaged in. And it’s for this reason, I think, that we never treated newspapers as common carriers even though they serve an important function: the function of informing the populace. Second is the implications of common carriage, right? If we required social media companies to post on a common carriage basis, the result could be a cesspool similar to what we see in the seedier corners of the web. If common carriage requires treating everyone equally, that could require platforms to carry content like offensive speech or pornography or other material that Professor Eric Goldman calls “awful but lawful” speech. That not only reduces differentiation among platforms, but it also, I think, degrades the experience for most users.

And then finally, I think there are free speech concerns. And that’s important because unlike railroads, these platforms aren’t carrying lumber or cotton, right? They’re carrying speech. And so that has First Amendment concerns, which, I think, we’ll get into in a little while.

Another framework that might work — which I mentioned earlier — is nondiscrimination law. So we have standards within either federal or state law saying that businesses are not allowed to discriminate on the basis of certain protected classes, right? You can’t discriminate on the basis of race or sexual orientation, gender, religion — things like that. I think at least some of the complaints on the right sound a lot like a call to add to existing nondiscrimination law protection against discrimination on the basis of political identity or political orientation. That might go some distance toward what those who are calling for common carriage might be getting at, which is a concern that these platforms are disparately silencing right-of-center viewpoints and not left-of-center viewpoints. But those, I think, raise additional complications as well. I’ll stop there.

Mark Jamison: OK. I want to come back to a couple of things that you were just talking about. One is how people feel about their political speech — especially in the compressed time as you’re approaching an election, when suddenly you can’t communicate on particular topics — what that’s like for them and why that might matter.

But then also, the nondiscrimination and how that may or may not work. But before we do that, let’s get back to the experiences of people. And, Kate, I want to ask you: So the platforms themselves find themselves trying to balance creators’ expectations and users’ expectations. Daniel just talked through a lot of controversies and conflicting opinions. What is it like for the platforms as they try to navigate those kinds of waters? What’s it like for them?

Kate Tummarello: Yeah, I think it’s a lose-lose game, but one they play anyway, right? And that’s, like, kind of commendable that they’re willing to just always take a beating. I mean — because we’re citing Eric Goldman, he has made the point several times — it’s a zero-sum game, right? Like, if you’re making one person happy by taking something down, you are inevitably making the person who said that, whatever it was, unhappy with the takedown.

So I think it’s impossible to win, and for startups, especially, right? Like everyone — you know — you might complain about Facebook, but Facebook isn’t losing enough users in numbers that threaten its bottom line. But for a startup that only has a couple thousand users, if they upset their users and they leave en masse, that could be the end of the company. So there is a really deep commitment among a lot of the platforms to keeping users happy and safe and, you know, creating a place where people want to be. So there is a tension there. And I think the assumption is that 230 is what lets companies take content down. And certainly, legally that is true, or legally it helps protect them when they take stuff down. They could take it down without 230, but I get the point. But more importantly, 230 is what lets them host it in the first place without fear. And so yes, of course, I’m sure it’s very frustrating as a creator to have something taken down, especially if it’s something you’ve monetized, right? If that’s how you make money and YouTube or someone takes it down, that can be devastating, and I don’t want to downplay that.

But the reason that we have a place where people can go and create content and make money off of it — and not have to wait for a TV network to pick up a TV show, or a radio station to give someone a radio show, or a magazine to give someone a column — we have this world of creativity online that is largely enabled by Section 230. And I think that piece often gets lost. We act like only the takedowns matter. But the fact that we even have a place where stuff can get posted to then get taken down is kind of incredible, and something that, you know, is not the way the world worked 15 years ago. That’s something that’s new because of Section 230, and something we should appreciate more. So I think the frustrations are real. And again, I don’t want to dismiss them. But they do kind of miss the first really important piece that there is a place to host content to begin with.

And then I think a lot of the conversations around transparency are really important. A lot of the frustrations that we hear about tend to be, “My stuff got taken down and I don’t know why. No one can point to the specific policy I violated.” And again, that gets back to what I said earlier: Speech is really messy. It’s really hard to write an acceptable use policy that predicts the future and encompasses everything and can be updated every time a new slang word gets a different meaning or, you know, suddenly an emoji means something it didn’t a week ago. So it’s definitely not — you know, we can’t be in search of the perfect solution here. But I think probably more transparency and clearer communication is a step in the right direction.

Mark Jamison: So there are two things in there that I would like for you to elaborate on. One is something that Justice Thomas raised, and that is, he said, “Sometimes it’s not just that the content is removed, but the platform actually offers some commentary on it.” They may label it as something. Or they may say, “Here, if you’re reading, you really ought to read this thing over here as well.” How important is that to the platforms in conducting their business if that becomes a legal problem for them? How important is it?

Kate Tummarello: Well, I imagine that a lot of that is in response to other users, right? Like, if you post a really inflammatory video that presents half-truths or mistruths and the platform is getting a lot of complaints, they might want to direct users to something more credible. That direction isn’t the same as the hosting; it’s the platform taking an action in saying, “No, look at this instead.” And, certainly, the fact-check boxes that will pop up — that’s the platform’s own content; that’s not protected by 230.

So I think there’s a reason they’re doing it and it’s largely in response to user complaints. Again, no company — there’s this weird misconception that companies want to create a terrible space online. It’s no good to do that. They certainly get dragged through the mud when they do that. And you drive away advertisers, right? You’re not going to see a Crest toothpaste ad next to something terrible online for very long. So I think the platforms have a real interest in making sure they’re creating a space, like I said, where people want to be. And that includes balancing out some of the harmful content with some better content. So I think there’s a real incentive to do that, and users benefit from it.

Mark Jamison: All right, well, thank you. And I’ll get back to some of the other points a little bit later.

So, Matt, let me bring you back into the conversation. Some people from a legal perspective have argued that Section 230 really isn’t necessary because you have the First Amendment, which protects the platforms’ rights to manage everything on their platform. It’s a freedom-of-speech type issue. And the common law would have worked all this out eventually. Is 230 just kind of an efficient legal instrument — what are your thoughts on that?

Matt Perault: So first, I want to pick up on a point that Kate just raised that I think is really important and is an underdeveloped part of Section 230, which is the line between content creation and content hosting. So the statute is really clear with content creation: A platform can still be liable if it’s a creator of content. So Facebook, when it engages in Facebook Watch, for instance, where it commissions content and then broadcasts it, it’s liable as a publisher for that because it’s serving as a content creator.

Similarly, Mark, as you described in your opening, a traditional publisher can benefit from Section 230 protections when it hosts a comment section on its websites: The New York Times’ cooking community, for instance, all of those wonderful comments about substitutions that you can make in your recipes. The New York Times is protected by Section 230 when it hosts those comments.

So platforms, when they add context in their own name, when they say something explicitly that they are writing as the platform and publishing, they are liable for that. And if that kind of platform speech were something that someone wanted to target, a platform couldn’t use Section 230 as a defense in those cases.

On the First Amendment, I think you’re exactly right that there’s a lot of confusion about what Section 230 protects and what the First Amendment protects. People all the time say things like, “Platforms can host whatever speech they want and the government can’t tell them what speech to host on their platforms because of Section 230.” But the platform’s rights as a speaker derive primarily from the First Amendment, as you said, not primarily from Section 230.

I think Section 230 does do additional work in a couple of different ways. One is because of the situation that you described and Daniel described in the mid-1990s, where platforms were being held liable as speakers for content that they hosted. Section 230 makes clear that platforms shouldn’t be liable in those circumstances.

And then as Kate was alluding to earlier, it’s a routing mechanism in litigation to get cases kicked out of court relatively early. So even if a platform were going to win a lawsuit later on in the litigation process, if Section 230 weren’t available, it would likely have to go deeper into the litigation process, incur more expense, and then as Kate has described, that would be a significant expense for small companies. It might be an expense that companies like Google and Facebook can bear; it’s not an expense that smaller platforms could bear.

Mark Jamison: All right, thank you for that.

Kate, I want to go back to you just for a moment. One of the things that I’ve seen recently from Facebook is that they talk about how they’ve worked with people to develop industry standards. But they think that that doesn’t really have the teeth that it needs, so they’d like some laws — changing Section 230 and changing some other things as well. Generally — I’m an economist with a long regulatory background — I think if the industry has common interests, typically, industry self-regulation can be workable. What are your thoughts? So to what extent can the industry develop standards and practices and let people know this platform follows them well and this one does not, versus asking the government to step in? How does that work from your perspective?

Kate Tummarello: Yeah, there are a lot of industry-led efforts out there to kind of bring some best practices — not just around how to moderate content, but also how to talk about how you moderate content. Because I think that’s a big piece: When people don’t understand what goes into content moderation, they’re frustrated and disappointed when it doesn’t go their way. And I think those are all, you know, worth looking into and kind of worth holding up as something that we should be trying to do.

But I do think that it misses kind of the bigger point that you don’t really want content moderation to look the same across the whole internet, right? We always talk about the example: There’s a subreddit, I think it’s called “cats on their hind legs.” And all it is is pictures of cats standing on their hind legs. And if you post a picture of a dog standing on its hind legs, you are violating the acceptable use policy of that subreddit and your content will be taken down. It doesn’t make sense for Facebook to have that policy. It doesn’t make sense for Wikipedia to have, probably, any cats on their hind legs at all.

And you want an ecosystem where every experience is different. We don’t want the whole internet to look like Facebook where, you know, you can kind of go post almost anything you want. There are corners of the internet that are relevant to specific people and communities. You know, you might not have found, in your hometown, 40 people willing to swap pictures of cats on their hind legs, but because of the internet, you can find 400 — probably 400,000 — people willing to swap those pictures.

And, you know, making that community have to look like Facebook doesn’t make sense. And so I think as long as those industry best practices recognize the need for different approaches to content moderation, that’s worthwhile. But it’s a problem when you come in with kind of a law that acts like everyone should be treated the same — and I think Matt mentioned the PACT Act earlier. Looking at things like transparency reporting, a takedown on Wikipedia is very different from a takedown on Facebook because on Wikipedia, it’s the community doing the takedown.

So when we talk about how to make it work for the whole internet, we need to be really nuanced and really specific because again, the internet is not a monolith, thankfully. I don’t think we want that. And certainly, we want to make sure the industry isn’t converging onto one model that works for one or two companies. And if I could just add something, sorry, about the point before about creators —

Mark Jamison: Sure.

Kate Tummarello: I think a really good example — because Daniel brought up the last big amendment to Section 230 — was the sex trafficking law: SESTA-FOSTA. And speaking about creators, we’ve talked to companies who’ve said, “In the wake of SESTA-FOSTA, we’ve taken down content that isn’t sex trafficking. It’s certainly not the act of sex trafficking or enabling sex trafficking, but it might be sex trafficking adjacent.” And so an example was a person who did a podcast providing resources for people who were looking to safely leave the sex trafficking world. And it got really hard to host that content because the companies don’t want to get sued over it, right? Section 230 lets platforms prioritize the needs of their users (including creatives) over legal risk. And when you start to tinker with it, you change that calculation, and suddenly you’ll see more takedowns, not fewer.

Mark Jamison: So one of the things you mentioned earlier, I’d like you to elaborate on it a little bit again if you would please. In the complexity of moderating content, it’s hard for a platform to always get it right. Some apparently rely upon artificial intelligence, which I tell people is neither artificial nor intelligent. It makes lots of mistakes. How does that work? Why do the mistakes happen, and should there be legal consequences for them?

Kate Tummarello: I mean, I think you want to incentivize companies doing proactive steps. But it’s definitely dangerous to rely too much on those. Sorry, I don’t know if anyone else can hear the echo, or if it’s just me. OK.

Mark Jamison: No, you’re fine.

Kate Tummarello: OK. Sorry about that. And you know, Google has built Content ID and they’ve spent $100 million on that — and that figure is a little bit old now — to catch copyright-infringing content. Mark Zuckerberg has testified that Facebook is hiring tens of thousands of content moderators. So these are already tools for kind of the biggest of the big companies. This is not something the average company can invest in or can invest in right away.

And so I think, you know, you want companies to be doing their best to take down bad content. But you have to recognize it’s going to take down the wrong stuff, it’s going to miss the right stuff. Like I said, speech is messy, and it’s really hard to get either a computer or a group of 10,000 people to identify it correctly.

Matt Perault: I think I’m the only one so far who hasn’t quoted Eric Goldman so I’ll do it here. And I think that he said, “The solution to a lot of the problems that people have about Section 230 is actually Section 230.” And the reason for that, I think, is the reason that Kate is describing, which is: Section 230 permits an incredible diversity of different types of sites offering very different value propositions. The kind of common-carrier style regulation that Daniel is discussing would actually narrow the field dramatically and make more platforms kind of all look like each other.

I think the solution to the concern, Mark, that you’re raising is more competition: so competition around error rates and content moderation. I know we could have a whole separate conversation around whether there’s sufficient competition in the tech sector. But I do think that’s the route to addressing some of these concerns. You would want platforms to compete on the issue of: How high quality are their content moderation practices, and are they making a lot of mistakes or not? Humans make a lot of mistakes; robots make a lot of mistakes. I’m not sure, at this point, that we know whether AI is better or worse. But you would want that to be a thing that companies would compete on: Who offers the best AI or who offers the best human moderation system?

Right now, for instance, Reddit is heavily dependent, I think, on community moderators. Facebook is heavily dependent on algorithmic moderators. Which one is the better mechanism? And could Reddit improve its community moderation system over time such that some people choose to go to Reddit when they want certain experiences over Facebook because they prefer the quality of the content moderation services?

Mark Jamison: So you’ve raised the issue of competition; I will come back to that because standard economics is that if you start regulating an industry, the big firms win, the small ones lose. I’d just like to know if there’s an exception here.

But you also raised, Matt, the nondiscrimination obligation the common carrier would have. And that’s something I wanted to come back to Daniel on. Common carriers were required to be nondiscriminatory because of the situations they were in. It goes back many centuries. But we have it, you know, more recently in the US: Over the past 150 years, it came in because the services were so essential and the customers were not in a position to protect themselves, so these nondiscrimination requirements were imposed. If a nondiscrimination requirement were imposed on the tech sector as we know it today, what would that look like, do you think? And what do you think the impacts would be?

Daniel Lyons: Yeah, it’s an interesting question. I mean historically, the big thing we were concerned about is discriminatory rates: that the railroad would charge favored shippers one rate and un-favored folks another. And that’s really bizarre to think about in the world of social media where they’re basically giving away the product for free, right?

To the extent that common carriage would apply, the idea would be something like breathing a lot of life into the good faith clause in Section 230(c)(2): the idea that you’re not going to be taking down user content except in accordance with clear principles that are fairly applied to users across the spectrum. And in particular, across the political spectrum, I think, is where a lot of the heat and light is on this issue.

Does it work? Does it not? I’m not entirely sure, because a lot of the assumption that goes into the common carrier model is that it’s relatively easy to identify what is an unfair or discriminatory practice, right? And when you’re in the business of editorial judgment, it’s really hard to separate “I’m discriminating against you because you’re conservative” from “I’m discriminating against you because the thing you said doesn’t seem to be right.” There’s a confusion between message and identity that’s going to be really hard to disaggregate.

The conversation we were having earlier about error rates assumes that there is, you know, an easy definition of what is correct and what’s not, and the fact that that’s not always the case is going to be an issue. John Tierney at City Journal just posted a great article; he had written a piece about a high school track athlete who had passed out after running with a mask on. And the whole article was about the downside to masking for high school students, citing a peer-reviewed German study about negative health effects. And he immediately got it slapped with a Facebook misinformation label, because the fact-checkers that Facebook was contracting with for those particular claims were very pro-masking and seemed to downplay this German study despite the fact that it was peer reviewed — and, in the opinion of the article’s author, were bringing their own biases to the table.

It’s very hard to disaggregate that, and that’s one of the many challenges, I think, that are going to arise when you try to apply that type of a model to a company whose editorial judgments are a part of their business.

Mark Jamison: So some people, Daniel, react to the discrimination that happens because of the compressed time in which they have to find an alternative and to react. And I think that’s especially true in the case of an impending election but anyone could feel it at any time. Kate had talked earlier about someone’s business. Suddenly it’s demonetized, and if you’re a small business, cash flow matters a great deal. What do you say to people who feel like they don’t really have the opportunity to react before the damage is done?

Daniel Lyons: Yeah, I think, in part, the answer is sort of what Matt was discussing earlier, right? Having more options out there, making more content available, I think, or more platforms available for content is part of the solution. So even if I’m demonetized on YouTube or on Facebook, I can go elsewhere. Parler was filling that role for a while before they found themselves knocked offline which, I think, raised some questions that not a lot of us had thought all the way through about levels of competition at different areas of the stack and the fact that the internet ecosystem is, in fact, far more complex than we had made it out to be in earlier policy disputes on the topic.

I mean, unfortunately, there’s not any easy solution. But the more opportunities there are to broadcast one’s speech, I think the less likely it is that one individual platform’s decision to censor or to take down is going to have a huge adverse effect on your bottom line.

Mark Jamison: OK, I’ve got one more question. (I actually have a lot of questions. We could do this for another hour and a half, but we won’t.) I’ve got one more question I’ll ask, then we’ve got several questions in queue from the audience that I’ll be going to.

And Kate, the last question is for you. So Matt mentioned earlier that President Trump and President Biden agreed on something: We should not have Section 230. What happens if we did not have Section 230? What’s the consequence?

Kate Tummarello: I think you only have Facebook and the other companies that can afford not only to hire moderators and build tools — whether or not they work effectively — but also, of course, to survive lawsuits. So you end up with kind of the biggest of the big sticking around; they’ll be fine. And I imagine trial attorneys will have a heyday being able to bring court cases that they had not been able to previously. And I think you’d end up seeing a lot of, you know, smaller companies — even medium-sized companies that we think of as household names but don’t have the resources of the bigger companies — you’d see them changing their business models. I don’t think I’d be launching a startup next week if 230 wasn’t in place. Maybe the courts would figure it out, right? Maybe we could get some kind of understanding about, you know, sufficient knowledge under the law to be protected by the First Amendment or not, similar to how bookstore liability protections work. But that would take a while. And you’d see a lot of companies kind of get caught in the crosshairs. So I hope we don’t go to full repeal anytime soon.

Mark Jamison: All right, well, thank you very much. Again, I’ve got lots of questions left, but let me go to an audience question. And this question isn’t directed to anyone in particular, so I’ll let you volunteer on it. It says, “Given that the costs of a full repeal of Section 230 would be borne by small businesses — just like you were describing, Kate — is a feasible solution to limit the protections to companies of a certain size, so that only small companies are protected and the big companies have a heavier burden to carry with liability and risks?” What are your thoughts on this? Because I see this a lot in proposed legislation. Matt?

Matt Perault: I’m happy to take a first stab, and then I’m curious what others think. So I find this to be a problematic policy tool, generally. As you said, it’s kind of increasingly the way that we go about thinking about regulation of tech companies. For most issues that are of concern, I think, for users of technology products, I don’t think size should matter for what those concerns might be. I don’t know why we would be concerned about our privacy rights vis-a-vis Facebook but we wouldn’t be concerned about them for a platform that has a million users.

And I think the same is true here when we’re talking about speech and harmful content. I think the rules that govern how we think about speech as a society, we should try to develop the right rules across a range of different platforms and not just focus on a small number of large platforms. I think there are people who would be concerned about speech on Facebook but would also be concerned, potentially, about speech on Parler or, you know, speech on a range of different smaller platforms. And in some ways, I think we might actually want to be concerned in some cases — when we’re concerned about things like security, for instance — about practices on smaller platforms. So I think we should come up with what the right model is and apply it across the board.

Even if we were to just narrow it, I think, to Facebook and Twitter and Google, for instance, I do think there would be concerns there about what would be the impact on user expression. We would, I think, want to — my view is that having open speech platforms, or as open as possible, is a good thing. Not because we like all speech but because typically, speech restrictions end up being problematic. And Section 230 is a vehicle for that and we want to have that on Facebook just as we want to have it on smaller platforms.

Mark Jamison: Anyone else want to address this? Kate?

Kate Tummarello: Yeah, I mean, I think a good motto is always, like, “The best policies don’t need exceptions.” The policy should just work. A couple of things about this question specifically — because we get it a lot, obviously, being the startup people — the first is that it’s really hard to get those thresholds right. I think people assume that if you have a lot of users, you have a lot of money. And that is definitely not true in early stages. So just because you have, you know, even several million users does not mean you can hire enough content moderators or whatever to be sifting through content in real time and catch the bad stuff. So getting that number right is a lot more difficult than it sounds. And you see this in basically every context. Like, whether it’s taxes or, yeah, even in privacy — sorry, my dogs are barking — you see this all the time, so that’s kind of part one.

And then part two is: I do think, to Matt’s point about downstream effects on expression generally, this does also have a small business angle here because we have startups who advertise on Facebook. That’s how they reach users. If Facebook was going to be held liable for every ad that got uploaded into their Facebook ad portal, they would need humans reviewing every ad. And they wouldn’t take on small clients like a startup that has a couple thousand users and a couple thousand dollars to spend. They are going to say, “No, go advertise somewhere else. We need to focus our resources on the big guys who give us lots of money.” So I think you end up reducing opportunities for everyone, including other small businesses. And so I think the threshold question, coupled with the fact that there are downstream effects, means that this is not the practical approach that, like, everyone wants it to be and it seems like it could be.

Mark Jamison: OK. Daniel, do you want to add?

Daniel Lyons: Yeah, and I think if —

Mark Jamison: I want to give the next question to you as well so go ahead.

Daniel Lyons: Yeah, I think one thing I know is that when you subsidize something, you get more of it, right? And so what this in effect would be doing is subsidizing small businesses at the cost of larger ones. And that strikes me as somewhat problematic, particularly in the field of social media networks where economies of scale are so important, because these platforms benefit from network effects. In fact, the larger the platform is, the better it is for users. So a legal rule that disincentivizes growing to an optimal size would effectively balkanize many of these services and make them less useful than they otherwise would be — and, I think, would run the risk of fragmenting and separating us along political dimensions even more than we already are. And that, I think, has an additional social cost to it.

Mark Jamison: All right, thank you. The next question sounds like something that you might ask a law student. So Daniel, I’m going to ask you what the answer might be. The question is this: Section 230 is so controversial in part because it came from Congress as an immunity. What if the courts had continued to chew over cases — there’d been no Section 230 — and had come up with a rule of no liability rather than an immunity? Would the courts have gotten to that result, or would there still be a debate about liability unfolding in the courts — maybe a little better than the debate happening now in Congress? So “where would the courts have taken us?” is, I think, the basic question.

Daniel Lyons: It’s a good question. Obviously, counterfactuals are impossible to answer, right? But that is a path that we could have taken, and it is a road not traveled that might’ve been better. The common law would have evolved the way that we see it evolve in other circumstances. Because the law in this area is largely state-specific, you might have seen different decisions based on different platform or business models but also based on different states. And that might’ve allowed more experimentation and testing of optimal policies over time. So it’s sort of the classic federalism question: At what point do we think the benefits of uniformity outweigh the benefits of continuing to allow the laboratories of democracy to keep experimenting with different alternatives?

The downside to a common law–like approach might’ve been inconsistency of judgment and continued fracturing of the marketplace. And that becomes a little bit problematic when you’re talking about services that are operating nationally or internationally over the internet, because it means your company has to create one set of rules for California, another for New York, or turn off the service in New Hampshire, something like that, which at some point might be suboptimal. It’s not clear to me that we were at that point in 1996. We might very well have benefited from additional percolation in the lower courts before deciding as a matter of federal statutory law what the liability rules should be.

Mark Jamison: So, Daniel, you raised the fact that these platforms operate across multiple countries. Section 230 is a US law. To what extent does it really matter, or what do we learn from other countries not having it? You know, Canada doesn’t have Section 230. Mexico does not. Europe does not. So what lessons can we draw from other countries having different sets of laws, but yet these platforms still thrive in those jurisdictions? Anyone?

Kate Tummarello: I’ll jump in. There may be US-based platforms that operate in those places, but none of those places have their own platforms. Facebook was not built in Germany. And because I mentioned Germany, I should add that Facebook had to put a lot of people in Germany to deal with NetzDG (their online speech law). You know, again, Facebook can make it work, and other companies of a similar size can figure it out. But those laws are not costless. And, you know, every country obviously makes its own decisions and weighs the costs versus the benefits. And I’m sure there are good reasons for doing what they did, but it does make it harder to operate there. And you don’t see speech platforms popping up in other countries the way they do here.

Mark Jamison: All right. Matt or Daniel, do you have thoughts on that question?

Daniel Lyons: So this is similar, I think, to the debate that’s playing out in the privacy context, where American business is much more friendly toward data monetization than we see in Europe. And some of the evidence that we’re beginning to see by looking at companies that are operating across borders is that the American business is subsidizing the European business to some extent: that the additional costs of privacy regulation under the GDPR are being offset by the additional benefits that you get from operating without those restrictions in the United States.

So yeah, we do see different liability rules. The story may be similar: that the additional cost of those liability rules in Canada or Germany can be paid for out of the lower cost production here in the US.

Mark Jamison: All right. So we have another question from the audience that is more fun than the questions I’ve been asking. It wants to know, and I’ll have each of you answer if you don’t mind: What’s your favorite misstatement about what Section 230 does? What’s the most important commentator, official, or news outlet that has gotten it wildly wrong? So you can point fingers twice: bad statement and someone who’s made it. Who’d like to go first?

Matt Perault: I’m happy to take it on first. In terms of the favorite misstatement, I think there’s a lot of misunderstanding about how news publishers are supposedly treated differently by Section 230 from tech platforms. And that’s not the case; the statute doesn’t distinguish between news publishers and tech platforms. The focus, as I mentioned earlier, is on hosting content versus creating content. And in any number of different contexts, The New York Times or News Corp benefit from Section 230 protections. And in any number of other contexts, a company like Google, or Facebook, or Twitter can be held liable when it is actually playing the role of a speaker. And so I think that that’s really critically important to understanding how the statute functions and the benefits that different kinds of entities get from its protections.

I’m going to take the flip of the second question about which commentators typically get it wrong and focus on which commentators typically get it right. There’s actually sort of a cottage industry of commentators who I think focus almost exclusively — no, not exclusively — who spend a lot of time correcting misunderstandings. So Jeff Kosseff, as, I think, Daniel mentioned in his opening — if you follow him on Twitter, on a regular basis, weekly, if not daily, he says, “Here’s an allegation about what Section 230 does. It does not do that; here’s what it actually does.” Eric Goldman, who all of us have mentioned, does a great job of this, along with Emma Llansó at CDT and Daphne Keller at Stanford. There is a big community of people who, I think, focus really on trying to be clear about what Section 230 does and doesn’t do.

Mark Jamison: OK.

Daniel Lyons: Yeah, I’d add Mike Masnick to that list too. And I 100 percent agree that there’s a false dichotomy between platforms and distributors — or, I’m sorry, platforms and publishers. You’re a platform, and when you cross over to a publisher, somehow you lose your 230 immunity. That’s not even a defense; that’s not even in the statute. There’s no statutory hook for that.

I think the error stems from this distinction that occurs offline: that there are different liability rules for publishers versus distributors of material offline. But Section 230 was explicitly designed to eliminate that. I mentioned to a friend that the publisher-platform distinction is second only, I think, to “you can’t yell fire in a crowded theater” as one of the most common misperceptions in the area of speech regulation.

Mark Jamison: Kate, your favorite misstatement?

Kate Tummarello: Yeah, I mean, the publisher-platform made-up distinction is a good one. But I think mine is the idea that Section 230 lets companies take stuff down — like, absent 230, Facebook or Google would have to host everything. That’s just not true, right? We’ve talked at length in this panel about how 230 is just kind of like an efficient way of not drowning in court if you’re a company hosting user-generated content. And I think that’s one that especially Republican officials have really latched onto, trying to make the case that absent 230, absent this gift from the government, this government subsidy, you know, Facebook couldn’t ever moderate content. And that’s just not true.

Mark Jamison: All right. One more question in that space, and then I’ll have a wrap-up question. The wrap-up question, so you can just think about it, will be: What do you think is the worst thing Congress could do, and what do you think is the best thing?

But Kate, before we get to that question, on the taking content down: The concern oftentimes is that the terms and conditions just aren’t clear. People don’t really understand how they’re being applied. How could that be improved?

Kate Tummarello: Yeah, I think more transparency and clear communication would go a long way — like I said, the Reddit of cats on their hind legs, right? No one is terribly shocked, I hope, when their picture of a dog on its hind legs gets taken down because the parameters are very set and very upfront. I do think it’s hard to write a super clear policy that’s future-proof. I think it’s probably impossible to do that. And, you know, the average Facebook user or YouTube user doesn’t spend every morning reading through The New York Times, The Washington Post, and how Google has updated the YouTube acceptable use policy. So, like, that’s probably never going to be perfect.

But I do think the companies, you know, when things are automated, should be making sure that there is the chance to have a human look at it, because we do know there are error rates for automated content filtering — being clear about when acceptable use policies are updated or how they’re being updated. But, again, it’s probably never going to be perfect, and we should just accept that speech is complicated and that there’s a lot of it online that there wouldn’t be absent Section 230.

Mark Jamison: We don’t understand the future. That’s why we call it the future. I get it now.

OK, so, Daniel, what’s the worst thing and the best thing Congress could do?

Daniel Lyons: I think the worst thing Congress could do is repeal 230 completely. I think you would probably wind up with much less user-generated content, and we would probably roll back to a kind of pre-YouTube, pre-Facebook model of client-server architecture where companies produce content and users are just passively consuming it. Because the liability for allowing every Tom, Dick, and Harry to talk on your platform just becomes too great.

The best thing that can happen, I think, is a very slow, careful consideration of where the pockets are — places where 230’s current liability rules are creating costs in excess of their benefits — and drafting legislation that targets that, hopefully apolitically.

I think the odds of that are fairly low because although, as you indicated, there’s a consensus that 230 should be changed, there’s a huge divide on which way, right? Democrats want more moderation, Republicans want less. That said, I think Hawley’s proposal is an interesting one. He’s got a number of bills on this, but one of them gets at the idea that you as a company get your 230(c)(1) immunity — no liability for user-generated content — if you agree as part of your terms of service to a real, good faith commitment not to take down content on the basis of political identity or whatever, right?

Mark Jamison: Yeah.

Daniel Lyons: And that would make it more a matter of achieving by contract the kinds of protections that I think large chunks of the right are hoping to achieve. There’s potential for that. It’s certainly a problematic proposal, but it’s a platform on which, I think, folks could grow. Pardon the pun.

Mark Jamison: All right, so, Matt, we’re down to two minutes. Very quickly — worst thing, best thing?

Matt Perault: Yeah, I agree on repeal being the worst thing, particularly because I think a set of reforms in front of Congress right now — including repeal but not limited to repeal — really would strengthen the hand of Big Tech companies and make things more difficult for smaller companies. I think the best thing they could do would be targeted reform, taking advantage of the existing text of Section 230. So as Daniel said at the outset, in a case brought under federal criminal law, a platform cannot use Section 230 as a defense. As a result, there’s robust room for thinking about what things Congress could criminalize that would then target some of the speech areas that we’re concerned about. I think online incitement to riot is one. I think another is voter suppression and voter fraud.

Mark Jamison: All right, thank you. Kate, you get the last word.

Kate Tummarello: All right. Yeah, worst would be total repeal. Best would be a really thoughtful consideration of how opportunities for speech online have benefited small businesses, marginalized voices, people who otherwise wouldn’t have a chance to speak or to distribute their speech. And so, you know, making sure that thought is first and foremost when thinking about 230 reform, I think, would go a long way toward guiding Congress to a better place.

Mark Jamison: All right, well, thank you everyone for all your contributions. You’ve given a lot of people a lot of food for thought. And hopefully we’ve provided people with some good insights so that, as this gets debated in Congress and in other policy circles, people can think better about it. So thank you to the audience as well. This is Mark Jamison; we’re just very glad that you were able to join us. Thank you very much.