American Enterprise Institute

Should we reform Section 230?

Welcome and introduction: James Pethokoukis, AEI

Panel discussion

Participants: Danielle Citron, Boston University; Jim Harper, AEI; Jeff Kosseff, US Naval Academy

Moderator: James Pethokoukis, AEI

10:00–11:30 a.m. Friday, September 6, 2019

Event Page: http://www.aei.org/events/should-we-reform-section-230/

James Pethokoukis: I think we’ll get started. If anybody else shows up late, well, they showed up late. And they can just barge through that door.

Good morning. I’m James Pethokoukis of the American Enterprise Institute. And this fantastic event you’ve chosen to attend, watch online, or view on C-SPAN is “Should we reform Section 230?”

And let’s start things off with an interpretive reading of Section 230 of the 1996 Communications Decency Act. It is not a long section. It is brief, but important. And it states as follows: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Now, for more than 20 years since the birth of the internet age as we know it, Section 230 has provided websites with immunity from liability for what their users post. The Electronic Frontier Foundation describes the importance of Section 230 this way: “This legal and policy framework has allowed for YouTube users to upload their own videos, Amazon and Yelp to offer countless user reviews, Craigslist to host classified ads, and Facebook and Twitter to offer social networking to billions of internet users. Given the sheer size of user-generated websites, it would be infeasible for online intermediaries to prevent objectionable content from cropping up on their site. In short, Section 230 is perhaps the most influential law to protect the kind of innovation that has allowed the internet to thrive since 1996.

“And yet, over the past few years, Section 230 has faced heightened criticism from both the left and the right. Many critics on the left argue that tech giants enjoy special protection without any responsibility. They allow speech without worrying about its consequences, resulting in harassment and the increased prevalence of hate speech online. Then other critics on the right have argued that these companies have gone too far in policing for these vices, or at least have gone too far in policing conservative voices and enacted a biased double standard when it comes to content moderation. In light of these criticisms and aligned with the bipartisan political backlash against Big Tech, we’ve seen calls to reform Section 230.”

And it is these calls, and this debate, that we’re going to discuss today.

And I often use the phrase “all-star panel.” This time, I mean it especially: this is an all-star panel. Immediately to my right is Danielle Citron, author of the 2014 book “Hate Crimes in Cyberspace.” She’s also a professor of law at Boston University School of Law, where she teaches and writes about privacy and free speech. Next to Danielle is Jim Harper, a visiting fellow at AEI, where he focuses on privacy issues and select legal and constitutional law issues. And over at the end is Jeff Kosseff, assistant professor of cybersecurity law in the Naval Academy’s Cyber Science Department and author of the recently released book “The Twenty-Six Words That Created the Internet,” and a past guest on the AEI “Political Economy” podcast. Welcome all.

Let me start with Jeff. What do we know for sure — or we know reasonably for sure — was the intent behind Section 230? Why did legislators feel the need to bring it into being?

Jeff Kosseff: So there were two main reasons when you look at the legislative history and when you talk with the people who were involved in drafting and advocating for Section 230.

The first was that there was really a concern at the time about — which is kind of quaint — legal pornography that was available to children online. That was the biggest concern about the internet at the time. And there was a series of court rulings under the common law and the First Amendment, which basically stood for the proposition that if you did not moderate content, you could actually increase your chances of being protected from liability for all user content. But if you started to moderate, then you could expose yourself to liability for defamation and other types of harm. So there was really a concern then that this system was giving an incentive for platforms not to moderate. So that was one driver of Section 230.

The other was — you have to think about the time. It was 1995. The modern internet was really at its infancy, and Congress wanted to get regulation and lawsuits out of the way of the development of this new technology. So between those two reasons — those were really the driving forces behind Section 230.

James Pethokoukis: I sense — and again, everyone can sort of jump in on this. You don’t need me to call on you. I probably should have mentioned that earlier. There seems to be confusion about that original intent — I think you’ve tried to dispel that with your book and elsewhere. What is the confusion? That seems fairly straightforward, and I know that you’ve talked to, you know, people involved, staffers and legislators. There seems to be confusion about that original intent.

Jeff Kosseff: Do we have all day? There’s a lot of confusion.

James Pethokoukis: In fact, we do not.

Jeff Kosseff: And I’ll say I’m only speaking on my own behalf.

James Pethokoukis: There’s a lot of myths around this. You don’t have to hit them all, but just that sort of narrower issue of, you know, what was the point of this to begin with.

Jeff Kosseff: Okay so, speaking just on my own behalf, not on behalf of DOD.

James Pethokoukis: Oh, sorry. I should have mentioned that.

Jeff Kosseff: I should say that there’s a lot of confusion. I’d say the biggest bit of confusion right now is that Section 230 is conditioned on neutrality, that it only applies to neutral platforms, whatever those might be, rather than publishers. The point of Section 230 was to eliminate that distinction and allow platforms or really online services at the time to exercise discretion, figure out what their users should or should not be exposed to and not face the liability. So I would say that’s probably the biggest myth.

James Pethokoukis: And — but, again, the issues that were sort of foremost at the time are issues, again, about pornography and defamation. They weren’t so much about politics and political views, were they?

Jeff Kosseff: Not at the time. I mean, there was a statement in the findings for the bill that said something along the lines of wanting to foster political discourse. So, I mean, there was a recognition that there would be free speech online. So to that extent, there was, but there was no conditioning of neutrality on Section 230’s protections.

Danielle Citron: But I think we’re forgetting the title of Section 230C, which is protecting — the idea was it would apply to Good Samaritans who were protecting against — who were blocking and filtering offensive speech. So, absolutely, the idea was to incentivize self-monitoring, to provide protection for those who are blocking and filtering offensive speech — offensive speech, which is completely legally protected speech. But we wanted to incentivize —

James Pethokoukis: But that word “offensive” —

Danielle Citron: That is in the subtitle of that.

James Pethokoukis: But what did people think they meant by offensive back then? It’s very different right now.

Jeff Kosseff: So you quoted C1, which are the 26 words. C2 allows good-faith efforts to block access to a list of types of content, including “otherwise objectionable” content.

Danielle Citron: Harassment. They have harassment in mind, right?

Jeff Kosseff: Yeah, exactly. I mean, Congress wanted to give a very wide range of discretion to these platforms to feel free to block content and not be exposed to liability.

James Pethokoukis: And just kind of — we need a brief legal point. Were they not already protected under the First Amendment? I think I get that a lot —what about the First Amendment? Does that not protect these companies? And why do they need this sort of extra protection?

Jeff Kosseff: So that was the problem that led to Section 230 in the first place. There’s a lot of different First Amendment protections, but one is for distributors of content that others create. And the rule there is if you’re a distributor, you can only be liable for other people’s content if you either knew or had reason to know. But the problem became that the court started saying if you start to moderate, you don’t even receive that protection, and you become strictly liable, just like a newspaper would be for its letters to the editor. And that was the weird system that was created by the First Amendment.

Jim Harper: And it wasn’t so much the First Amendment as it was common law developed — that is, the courts were looking at this new medium and coming out in different places, whether a platform provider was a publisher or was a speaker. In my opinion, the law kind of jumped ahead to what I think was the right result, but it’s unfortunate that we got there through legislation rather than through continued common law development. I think it would have taken years, but then again, years later, we’re here talking about it. If the rule had been generated from the bottom up through experience, I think it might have been a stronger rule and a better-understood rule and one that’s based more deeply in practice rather than getting this over-the-top legislation, which I think was pretty good, but maybe not perfect. So we’ll see.

James Pethokoukis: And given that and just a little bit on the history, how controversial was it? Or did this thing just kind of slip through — classic case where people kind of didn’t know, understand all of the consequences that would sort of flow out from it, or was there much real debate at some point in the process?

Jeff Kosseff: So there was a little bit of floor discussion. There was no real opposition on the floor. It passed. It was attached to the Telecom Bill in a 420–4 vote because this was part of the much broader overhaul of telecom in 1996. And Section 230 was really seen as an alternative to the Communications Decency Act, though both of them ended up being put in the Telecom Act. The Communications Decency Act imposed all sorts of prohibitions for speech that was indecent, and the Supreme Court struck that down.

So Section 230 received barely any media coverage at the time. And when it did receive media coverage, there was no coverage of the fact that it was going to shield platforms from liability for most types of user content. So it was really an afterthought at the time.

James Pethokoukis: I’m curious. So we have this law that shields internet companies, big platforms. How does it work in Western Europe, for instance? Do they have their own version of Section 230?

Jeff Kosseff: So they have rules. I mean, Europe has a number of different rules. I write about one of the cases in the book. But it varies a little bit, and it gets into the classification, whether it’s a conduit or some other types of service. But there are some cases where the courts have said not only do you have to take down user content if you get a notice complaining about it, you also might have a duty to actively patrol for user content that might violate not just defamation laws, but hate speech laws and things like that. So it’s much more restrictive in really most other jurisdictions than it is in the United States. We’re very unique in that respect.

Jim Harper: Maybe unintentionally, Jim, you said it protects big platforms, but it also protects —

Danielle Citron: Everybody.

Jim Harper: — small platforms. And my thinking on 230 is animated by operating a very small platform for a number of years. I ran a site called washingtonwatch.com, which was a government transparency site. Don’t go look there now. There’s nothing but broken dreams.

And when Web 2.0 came along, I changed Washington Watch to Web 2.0 style. You all remember that phraseology. That’s when people actually started to be able to comment and vote and add their own material.

So we had a comment section. I had a little success. There was a bill to extend welfare benefits, unemployment benefits, and it had something like 200,000 comments. And I was the only guy in my spare time watching what was going on on the site. And, of course, there were people behaving horribly to try to undermine the conversation. There were people who were telling me that someone else was using their name. And I was called on to moderate this situation. I wasn’t consciously using Section 230. I, you know, was just trying to do my very best to create a good environment.

But it’s extraordinary, the lengths that people will go to to, in my case, undermine the site. Because they thought I was in favor of this legislation when, in fact, it was a neutral platform — or meant to be. Unfortunately, the internet, for all its blessings, allows people to be truly wicked from behind that anonymity. And they will take every effort to upset each other.

They’ll take every effort to undercut the platform itself. So it’s an area that’s fraught, and 230 is a very important protection given what people do.

Danielle Citron: But it’s not just that — to make them upset. People will use network tools to destroy people’s lives, to post nude photos of them, to ensure that they can’t get a date, keep their jobs, get a new one. They will, you know, threaten death and rape. They post defamatory information that one can never sort of respond to, which is, “I’m not a prostitute. I don’t have a sexually transmitted infection.”

So the kind of mischief that we’ve seen isn’t just like hurt — things that hurt people’s feelings. It’s not just offensive speech. It is speech that we would say isn’t even legally protected, that we can criminalize and ensure liability for. And there are sites that make a business out of hosting nonconsensual pornography. And they get to, you know, with glee, say, “I can encourage people to post others’ nude photos. Sue me, good luck. I’m immune from liability.” And people’s lives are ruined.

So I think it’s just important to note that, you know, the mischief isn’t — and I’m with you because I helped run Concurring Opinions. And so we were busy filtering, you know, crazy comments, too, on our sort of legal theory blog. But it’s far more than that. Like any wonderful tools for great things and also for ill, we see a whole lot of illegality and destruction of people’s lives.

James Pethokoukis: Indeed. So we have this piece of legislation, not particularly controversial, now very controversial, and as I alluded in my remarks, for very different reasons, from, you know, different perspectives.

But just one second before we get to that. Because, Jim, you mentioned that perhaps some of these protections could have evolved sort of more organically rather than legislation. But given how the internet sort of developed, if that Section 230 had not happened, what might the internet look like today? And how might it have evolved without it? Since now we’re so interested in changing it and reforming it. If we didn’t have it right now, you know, what does the internet look like?

Jim Harper: You have an argument on that, I think. It’s in your book.

Jeff Kosseff: Yeah. So I think I’m a little skeptical that the common law would have developed to the extent of the protection that 230 provides. I think some of the early court cases were not very well reasoned or well decided that really imposed significant liability.

But I don’t think that the really broad protection that Section 230 provides for third-party content would have been the same under common law. I think there probably would be more of a “notice and take down” type system, where if you get notification of potentially illegal content, your choice is either take it down or you have to stand in the shoes of the person who posted it and then defend it, which most rational platforms are not going to want to do because they don’t want to litigate all these defamation cases. So I think you wouldn’t see necessarily the same extent of social media, of Yelp, all these services that really, not coincidentally, are based in the United States.

James Pethokoukis: I mean, people often will wonder, “Why, gee, where are all these sort of social media giants in Europe?” I mean, do you think 230 is a big reason, again, why we have them and they don’t?

Jeff Kosseff: Very much so. I mean, I think that they were able to develop in the way that they currently are because of Section 230. We would probably have different types of services, but they wouldn’t have the same sort of rules that they have right now without Section 230.

Danielle Citron: There are big social media companies, like in China, but it’s complete censorship and control. So it’s not like we wouldn’t have these companies. They would just look — you’re right, the whole —

James Pethokoukis: And people would just be, what, posting, you know, dog videos and their kids.

Danielle Citron: Pretty benign.

Jim Harper: I think it’s right that we got the big social media industry we did because of 230. Is that right or wrong? You can also conceive of 230 as a sort of subsidy, a liability-free subsidy, the same way the corporate form of organization is a subsidy to businesses that comes at a cost to consumers in some degree because there’s a little bit less liability for corporations. A little bit less liability for social media platforms meant we have a huge social media industry. It came at a cost, though, in victimization of select people in some cases that are very sympathetic, so.

Danielle Citron: And some of those negative externalities include speech. So when you’re targeted with harassment online, you are silenced so often, and we forget that those negative externalities aren’t just lost jobs and potential physical violence, but also speech.

James Pethokoukis: And sort of the externalities — I mean, it seems like it’s suddenly a very controversial topic right now. You know, we’re talking about different ways to regulate Big Tech. But those sort of externalities and sort of victims, they were there really from the very beginning.

And I know in the book you write, there’s, you know, several interesting cases of people who are actually hurt, you know, extremely sympathetic stories. But yet, were there any calls back then, “You know, listen, this is generating victims; we need to do something”? Or did it pretty much continue to its present state, sort of, you know, ignored by policymakers?

Jeff Kosseff: I think for the first decade, it was not really a hot-button issue, and even into the second decade it wasn’t. And there were some tough cases.

Danielle Citron: But what about, like, Sen. Lieberman? You know, the idea that he wanted to pressure Google and YouTube to take down terrorist videos that were visible, like, 2004? And that just gets ignored. I mean, so there weren’t — was that sui generis, Jeff? Because I only knew about those efforts at the time —

Jeff Kosseff: I guess what I’m saying is the public attention to it was not nearly the same. And there were really tough cases in the beginning. So there’s a lot of people who argue, well, there have only been difficult cases in the past few years. The second case ever litigated under Section 230 is a case called Doe v. AOL, where a mother sued AOL because there was child pornography with her son, her teenage son, in it, that was being marketed in an AOL chat room. And she kept contacting AOL, and they wouldn’t prevent it. That’s a difficult case. So it’s not just, like, there were only easy defamation cases in the early stages. There were tough cases all the way from the early days of Section 230.

Danielle Citron: Even though it’s child porn? You know, so that it falls outside of 230? You know, federal criminal law isn’t covered by that immunity.

Jeff Kosseff: But she was suing for negligence.

Danielle Citron: Okay. I got you, but important to clarify for everybody.

Jim Harper: Another important dimension of the problem is the changing technology over time. In the early days of CDA 230, maybe you had manual processes entirely. Nowadays, a thing like child porn, you can kind of automate tracking, and those systems are in place, where you get a hash of known child porn files, and you can review anything.

There’s a case in the 10th Circuit where AOL found child porn and delivered it to NCMEC, which was the government actor for those purposes. It’s called Ackerman, and Justice Gorsuch wrote the opinion. It’s a very good and interesting opinion on its own. But automated review for things that are self-evidently criminal seems like an area where you might be able to hold platforms liable. Self-evident, though, is important, I think, because if you’re talking defamation or stalking or something like that, it’s really hard for a person coming in for the first time to a conversation to figure out whether someone is defaming another. What’s the reality of it?

I did have a person contact me and say, “Someone else is using my name.” I’ve written a book on identity. I know there are names, even rare ones, that are repeated many times around the world. So I was dubious of the argument. And I couldn’t bring myself to take something down as an identity fraud, if you will, because the same name was being used as another person out there in the world.

I don’t think that the people who are faced with these decisions, like myriad different kinds of problems and stories coming to them in the bowels of the social media organizations, are going to be able to negotiate and navigate those kinds of determinations very readily. So for self-evident wrongs, you might have liability appropriately attached, but for things that aren’t self-evidently wrong, I think you probably shouldn’t have liability for those platforms.

James Pethokoukis: So these cases would come up, and there’d be these stories. But generally this was sort of left alone, it sounds like, by legislators — and sort of not anymore. Is it just that, you know, you have these sort of big platform companies, which, you know, have been in the news — the sort of “why now?” question. Why is this now this incredibly controversial issue? Was there a triggering event, or is it just that these companies have gotten so big and intrusive in our lives that we’re taking a closer look at all of them and what regulations and laws apply to them?

Danielle Citron: I think it’s like a couple of things coming together all at once. I feel like we could all probably take each thread and bring it together.

Jeff Kosseff: I mean, people don’t like the tech companies as much as they used to. I think that’s not a controversial —

Danielle Citron: Yeah, here we are — Cambridge Analytica, Zuckerberg is testifying, Dorsey, Sandberg, they’re getting pummeled. It’s a wonderful opportunity for every senator and congressperson to sort of hop on the bandwagon and say, “You are the antichrist,” you know — sorry, forgive me, I transgressed religious norms. You know, you bear full responsibility for all mischief online. And it was sort of an easy potshot. And then so what else can we throw in the mix? We have Cambridge Analytica, and the attack on our country from Russia and disinformation — and then, therefore, blaming the platforms for not doing anything about it when they were caught unawares and off guard — the advent of deepfakes. You know, we have — I don’t want to take up all the thunder. I feel like we all want to pitch in something, you know.

Jeff Kosseff: I mean, I would also say that one thing I’ve observed is that the traditional news media perhaps is not very happy with the business models of some of the large platforms because of what they’ve —

James Pethokoukis: And it certainly seems like one criticism is that this enables those very business models, and that it’s not just about, you know, moderating and shielding these companies, but it’s a key part of why, you know, these companies are worth $500 billion or a trillion dollars.

Danielle Citron: And they’re not nascent. They are engaged in surveillance that, you know, earns them a tremendous amount of money, and they are stealing the business of news companies. Well, right, Jeff, that’s what you’re saying — or at least in the mind of big media.

Jim Harper: One problem, which is familiar to the students of regulation, is that if you were to deeply undercut Section 230 protections, the biggies would be able to navigate the new law and the little competitors that are coming up to try to challenge them would not be able to navigate the new law cost-effectively. And so, like regulation typically does, it would lock in the status quo in the marketplace. And so, Facebook’s natural decline would be delayed, if not foreclosed, by doing away with 230 or deeply undercutting the protections of 230. It’s there as much for the small competitors as it is for the big platforms.

Danielle Citron: I have an approach that wouldn’t do that, but we can talk about that later, about what that might be that would treat the big platforms and the smaller platforms differently. We don’t have to get there yet, but —

Jeff Kosseff: I would also say that the larger platforms have lacked some transparency over the years. They have started to change. But, I mean, it was often like a black box to figure out what’s going on in there, how are they making these decisions?

James Pethokoukis: It seems to me that’s particularly a complaint among conservatives who think that companies are biased against them, and they don’t see — “Well, I don’t see why that person is being taken down, or demonetized, or shadowbanned, and why not that person? And if this has happened to me, you know, do I call an 800 number? What do I do?” I think, transparency and all this — they don’t understand the process. But when I went on, of course, Twitter, to discuss this, there was a statement that Section 230, you know, it’s terrible. Basically, what it does, it enables hate. It enables hate and harassment.

Danielle Citron: But it also enables great things, too.

James Pethokoukis: But you’ve written about it, you know, on many of these problems.

Danielle Citron: I have. And that’s right that the free pass, that if we don’t —

James Pethokoukis: In your fine 2014 book, again —

Danielle Citron: Yes.

James Pethokoukis: “Hate Crimes in Cyberspace.”

Danielle Citron: Oh, thank you so much. And in the work that I’ve done with Ben Wittes, in the Fordham Law Review in 2017, we sort of proposed a way in which we can fix the statute without doing too much violence to all the great things that it does. Because on the one hand, Section 230 provides this safe harbor, which isn’t conditioned on any good behavior or good faith when it comes to under-filtering. When it comes to over-filtering, we condition that on good faith, but for under-filtering — the part of the statute you read aloud is 230C1 — it doesn’t condition that immunity on any good behavior. It just says you have this immunity; we’re not going to treat you as the publisher or speaker of somebody else’s content.

And that has been interpreted really broadly to mean that even websites that encourage nonconsensual pornography — that entire business model is abuse, essentially — sites that, you know, traffic in deepfake sex videos, they get to enjoy this immunity from liability, even though they are not engaged in any kind of content moderation and, in fact, are encouraging it and making money off of it. And so, can I talk about my proposal?

James Pethokoukis: Not yet.

Danielle Citron: Not yet. No, that’s why I’m thinking — I want to make sure.

James Pethokoukis: This is like a stay-tuned moment. No one tunes out because the big proposals are coming. But, I mean, I think people —

Danielle Citron: I don’t want to get rid of it, needless to say. I think Section 230 has been incredibly important. It was incredibly important for these early years in which we do have the Yelps, we do have Facebooks, we do have incredibly pro-social activity online. It is now —

James Pethokoukis: That’s not what we are hearing, though. It seems like all these positive things, that most of the stories that you hear are about —

Danielle Citron: We have Wikipedia, let’s say. That’s pretty damn pro-social.

James Pethokoukis: But you hear about, again, whether it’s actual harassment, whether it’s hate speech, where people are uploading really violent videos — that these companies have been given a lot of power, and they’ve gotten very big. They show absolutely no responsibility. That’s the criticism.

Danielle Citron: And that’s (a) not true.

James Pethokoukis: There’s a classic, “With great power comes great responsibility.” Where’s the responsibility?

Danielle Citron: No, it’s power without responsibility. But what’s interesting is: where are those people — the folks I was writing about who were victims of cyberstalking, nonconsensual pornography — where were they? Companies were actually — there were all these politicians that are now so worried about hate and harassment. They don’t really care. Let’s be honest. All these years that I’ve been writing about it, no one said a word. And it’s only when it’s sort of like when it’s your ox that’s being gored that they care about it, when they think, “Oh, political speech is being marginalized,” which, empirically speaking, I’m doubting is true.

But the companies actually, the largest platforms, have been working on these issues, and I’ve been working with them — Facebook, Twitter, Microsoft — for years. They haven’t done a terrific job. They’ve been trying. They’ve gotten better at it as it’s become bad for business. You know, why does Twitter take a stand against stalking and threats and nonconsensual pornography in 2014? We had Gamergate. And advertisers started pulling their money. Same with Facebook. If it’s bad for business, they respond. They’ve long tried. They haven’t done a great job, but they’re making efforts.

James Pethokoukis: These are not new companies.

Danielle Citron: No.

James Pethokoukis: They’ve been around for a while. And I think a lot of people would just think they would have better policies, more aggressive policies at this point — and that, to your point, you know, maybe the politicians really didn’t care, but maybe the companies just didn’t really care that much —

Danielle Citron: Oh, yeah. I mean it’s like a will —

James Pethokoukis: — until it became really a big issue in the public consciousness.

Jim Harper: One of the things that I think is important to highlight is that a lot of the worst wrongdoing that happens online has a wrongdoer who’s on the other side of the platform doing the wrong thing. You take something like nonconsensual pornography or something like that —

Danielle Citron: They’re perpetrators.

Jim Harper: You know exactly who the perpetrator is. And they’re the one who’s the actual wrongdoer. And maybe you can divide it up in percentage terms, but if they hadn’t done what they did, the platform wouldn’t have any of that material on it. There’s a case that I think was dealt with in the Second Circuit recently, where it’s unclear that they actually saw this as the case. You probably know better than I do.

Danielle Citron: Yeah, of course.

Jim Harper: It’s unclear that they did very much to try to stop the actual bad actor.

Danielle Citron: And Grindr did literally nothing and designed their site so they made it impossible.

Jim Harper: Very well and good.

Danielle Citron: So they said — to find his perpetrator.

James Pethokoukis: Maybe spend 30 seconds saying what that issue is, what that case was.

Jim Harper: So, Grindr is a hookup site, and a jilted ex-lover adopted the identity of the other and invited I think something like thousands of men to come —

Danielle Citron: A thousand men, to his house.

Jim Harper: — to his house, you know, offering sex —

Danielle Citron: Because he posted fake ads saying that he was interested in anal rape —

Jim Harper: Really awful behavior.

Danielle Citron: — and provided the victim’s home address. And so over 1,000 men came to this man’s house because —

Jim Harper: Really, really awful behavior on the part of a known individual.

Danielle Citron: But Grindr says, “We can’t” —

Jim Harper: And Grindr opted not to do anything, but the known individual, was he — I think there was a protective order —

Danielle Citron: Law enforcement.

Jim Harper: — against him but very little else.

Danielle Citron: That’s right — it was not enforced by law enforcement. So I’m not the lawyer in the case, but I have written about the case and know the lawyer well who represents the victim.

Jim Harper: And I think if law enforcement failed to act —

Danielle Citron: And law enforcement says nothing —

Jim Harper: But I think that’s a law enforcement story more than a Grindr story. The attractive defendant, of course, is Grindr, and not the probably penniless perpetrator whose wrongful acts these were.

Danielle Citron: But I’m not sure we should feel so badly for Grindr. Because Grindr tells Matthew Herrick that it has no capacity to identify anyone on their site, and that they cannot prevent people from reappearing, which is nonsense. Let’s talk about it as a technical matter.

Jim Harper: Oh, I think it might make sense.

Danielle Citron: Its inability to trace IP addresses is a design choice.

Jim Harper: Right. But people’s ability to avoid IP tracking is pretty readily available. I think if you were required to figure out when and where a person was making fake accounts on your own platform, you’d have a hard time doing it. It would take a lot of resources, and if someone is dedicated to defeating you, you’ve got to put a full-time employee against any individual that’s trying to do wrong on your site.

James Pethokoukis: Do you think it’s fundamentally a 230 issue?

Danielle Citron: Oh, but it was, because the client sues, and 230 —

James Pethokoukis: Where it’s lacking, yeah, so where 230 is not doing the job.

Danielle Citron: I mean, because I think ultimately Grindr says, like, “We chose not to design our site in a way that would make it at all easy to block people who are doing mischief.” And you know what? Twitter, they can block mischief-makers. Facebook, it’s not that technically difficult. They design their — Jeff, you join me if I’m wrong about this, but, you know, Grindr’s business model, they said, would be disturbed by changing the architecture of their site. And I just think that, you know, given how it’s been misused, it strikes me as — they’ve created something that’s ultra-hazardous, and they walked away and don’t care.

Jim Harper: I’ll add quickly. A lot of us in the privacy community want surveillance to be pretty hard, even for the corporations that run the sites. So it’s not necessarily a bad thing that there’s a platform that makes it hard to figure out who’s who. So this is a tough balance to strike. And I’m not sure that Grindr was wrong when the actual perpetrator of the wrongdoing was — not entirely left alone, but not pursued as readily as the platform.

Jeff Kosseff: So I read the cert petition in the case, which is, I think, going to conference in October in the Supreme Court. I think it probably has the best chance of getting granted of any Section 230 cert petition, because it makes a really unique argument about product liability rather than the — because, I mean, Section 230 is really clear about treating as a publisher or speaker. I think there’s a reasonable chance that the Supreme Court would take its first-ever Section 230 case. And that’s a really big deal because Section 230 — we rely on a Fourth Circuit case, which was the first opinion to ever interpret Section 230. And everyone’s kind of taken that as how you interpret Section 230. And this, I think, is the biggest challenge that I’ve seen yet to getting it to the Supreme Court.

James Pethokoukis: So one reason this is sort of in the news, we have sort of more high-profile examples of harassment. We have hate speech online, and the companies just — and I think given this ability to moderate, and they’re not — you said that you’ve worked with some of these companies. Do they feel like they’re being treated sort of poorly by the media? You know, that all their good efforts — it’s a lot harder than what you think. All these policymakers — do they understand the technology? Do they understand how difficult it is? You just can’t come up with a better AI or hire, you know, 5,000 more moderators and the problem’s solved. They feel like people just don’t get the difficulty of that kind of moderating at scale.

Danielle Citron: Scale is a huge problem. So, for any given grievance, it’s really hard at scale to get it right all the time. But I think there’s some recognition — Twitter knows, and this is just speaking from nonconfidential information, that up until 2014, it wasn’t doing the right thing. It had a very hands-off approach when it came to threats and stalking and nonconsensual pornography. And it’s sort of changed its direction.

You know, Facebook has long banned harassment and stalking and bullying. And I would like them to be more transparent. They have incredibly complex rules internally. So inward-facing, they have guidelines that are — I mean, we take days to read all of it, but not public-facing. So that while they say they ban hate speech, they don’t define hate speech. They say they, you know, ban harassment and stalking. Define it for us. Explain to us why you ban it, and give us examples and tell users why you’re banning them and give them a chance to respond.

So, you know, are they doing better, you know, since 2014? They are. Are they doing a great job? I don’t think so. I’m on Twitter’s Trust and Safety Council, and I’ve been working with Facebook on their nonconsensual intimate imagery work, and with them for 10 years, but I do it in a way where I don’t get paid, so that I can be honest with them and I can talk about it publicly, not based on confidential information. But it’s important — if you don’t get paid, then as an academic, you’re free to say, like, “I can criticize you, and you’re not doing a perfect job.” But by my lights, Facebook and Twitter are, like, virtuous compared to the revenge pornographers.

So as we think of these things on a scale, bless Twitter and Facebook, Microsoft and Google, YouTube, for at least trying a little bit. Because you have sites whose business model is abuse and whose entire ability to make money is contingent on encouraging users to engage in nonconsensual pornography, threats, and defamation.

Jeff Kosseff: I would also just add that moderation is really hard at the scale that they’re having to moderate. So we can talk about, you know, what’s a hate speech policy for a large platform, but when you’re getting this flood of hate speech or other potential violations all the time and you have contract moderators — I encourage everyone to read The Verge, which just had a great series of stories about what the life of social media moderators is like. It’s not very good.

Danielle Citron: And Sarah Roberts’ new book, let’s plug that.

Jeff Kosseff: Yeah, absolutely.

Danielle Citron: “Behind the Screen.” Always happy to plug my friend’s books.

Jeff Kosseff: So, I mean, when you have people — I think they make $30,000 a year, and they’re being exposed to the worst possible aspects of humanity — having to apply these detailed policies, there are going to be mistakes that are made even in terms of whether it violates the policy. And there are subjective judgment calls. So there’s no easy solution here.

Nothing that Congress does or doesn’t do is going to solve the problem that we have right now.

Danielle Citron: And it’s not just the United States; there’s the European Union. So I’ve written a lot about the pressure that the EU Commission has brought to bear against the major platforms, bullying them into taking down hate speech within 24 hours. And it leads to sincere and serious censorship creep, because their terms of service — they use them globally. So if we’re seeing censorship creep due to pressure from Europe, it ends up here. So it’s not just, you know, conversations within the United States. We’re seeing pressure come from Europe as well. So these companies are global.

James Pethokoukis: And something we sort of touched on in the beginning, where I was sort of outlining the problems: you know, you have people on the right who are absolutely convinced that these platforms are, speaking of censorship, censoring political viewpoints that they disagree with. And as part of that argument, they’ve said, “Listen, Section 230 was established with the intent that platforms would be neutral, and that there’s a difference between platforms and publishers.”

When I went on Twitter again, tweeting about this, someone said, “Listen, they have to choose. Are they a platform? Are they a publisher? They need to fulfill their 230 responsibilities.” That’s one of the myths that sort of — so I guess, two questions that you all can answer. One: Do you think that the platforms, especially the big platform companies, are suppressing conservative/Republican speech? And sort of, two: What about that platform-versus-publisher distinction?

Jeff Kosseff: I mean, I could just answer no. There’s none. But I think to get a little more detailed here, there is not a distinction in Section 230. But this is also a law that Congress passed. So whether there is and whether there should be is a different question. So I mean, Congress could very easily say that there is going to be a neutrality requirement, and then we can debate the merits of that. And I think that’s a debate that we can have. But I think some of the debate has gotten conflated in terms of what Section 230 does say and what some people think it should say.

Jim Harper: I’ll open the debate on whether there should be neutrality. There’s no such thing. You can’t administer a neutrality rule, so don’t even try.

James Pethokoukis: Well, yeah, again there has been —

Danielle Citron: [inaudible] the telephone. No, like, you’re a conduit, so like, there’s no monitoring.

Jeff Kosseff: I mean you could have no moderation at all.

Danielle Citron: It could, right? No? Just asking.

Jim Harper: Electric wires are neutral. Platforms are not. The way they work and the way their users use them — that’s not neutral in any administrable sense.

James Pethokoukis: Well, you were, yeah, referring perhaps to a piece of legislation by Sen. Hawley of Missouri, where the commissioners of the FTC would have to certify that these platforms were politically neutral if they want to get those Section 230 protections. It would apply, you know, almost, I think, specifically to the very largest platforms. One — I guess you’ve sort of indicated already — one, is that a good idea? And if you did think it was a good idea, could you actually do it? Like, how would that actually work if you want to certify these platforms as being politically neutral?

Jim Harper: Oh, you would have to have a federal speech agency. That concludes my answer.

Danielle Citron: No, but keep going.

Jim Harper: It’s absurd on its face, given our traditions and our Constitution, to try to have any kind of federal body determine whether a platform or any speaker is neutral and fair. We had a fairness doctrine at the FCC — RIP for a very long time, hopefully. I think that that debate is over. That ship has sailed. It’s come and gone. Yes, of course, Congress will talk about it like it could happen, but it would die in the courts if it got through the legislature.

James Pethokoukis: Jeff, is that unconstitutional?

Jeff Kosseff: I mean, I think it would have to be tested. I think it probably would be. And I also think, even beyond the constitutional issue, practically, I mean, the FTC, even with their native advertising, I mean, they’re not very comfortable in making editorial judgments on things. So I think there’s constitutional issues, practical issues, and there also is a requirement for a supermajority on the FTC. So that would mean that even a minority party could block Section 230 for the large platforms if they wanted to. So I think there are a lot of logistical issues, as well as the constitutional issues.

Danielle Citron: But it might be helpful to explain why it’s a constitutional issue. It goes back, Jim, to your point about, like, the First Amendment and who are the First Amendment actors. Whose speech would be controlled by government and compelled? It would be private actors. Platforms are speakers, too. They’re private actors. They’re not the government. So they’re not First Amendment actors that owe us freedom from regulation. They would argue, again, sort of channel Eugene Volokh, but they would argue that they’re speakers, the platforms, and it’s their speech, and that they should be free from government regulation. So I just thought it might be worth — this is the teacher in me, so just circling back to your —

James Pethokoukis: Well, that brings up what I call my woke Facebook scenario, which I think we may have mentioned. You know, one day, you know, Mark Zuckerberg — who at one point, you know, was touring the country, and people wanted to know whether he was going to run for president — what if he says one day, “Listen, I’m very concerned about the direction of this country. So from now on, on Facebook, you can feel free to post about your family, funny pictures, what’s going on in your lives. And you can post about issues of racism and gender, but no Trump posts. We don’t want to hear posts about the Trump agenda. We don’t want to hear about that. We’re going to be woke Facebook.” Is that not a government issue at that point, when this massive platform, so influential in our lives, could change the debate? They may be a private actor, but they certainly have a tremendous public role.

Danielle Citron: They have tremendous power. But they’re not a public forum, and they’re certainly not a —

James Pethokoukis: So that will be okay, that woke Facebook.

Danielle Citron: Sure. Guess what, go to Gab, go somewhere else. Go to Twitter.

Jeff Kosseff: And there was a knitting web forum, an online knitting forum that made that very same choice, obviously Facebook’s —

James Pethokoukis: Only it’s somewhat less than three billion.

Jeff Kosseff: Yes, exactly.

James Pethokoukis: I think, I’m not sure.

Jeff Kosseff: I mean, I think that we had a recent Supreme Court decision a few months ago that Justice Kavanaugh wrote, which found that a public access cable station did not have First Amendment requirements. So I think that really gives you an idea of the direction the Supreme Court is going with that.

James Pethokoukis: In fact, I was reading an interview with Sen. Hawley, and he said, I think there were a lot of misperceptions about what Section 230 called for, about — that, in fact, there wasn’t some grand bargain decades ago about how these would operate. And I think that times have changed. When this regulation was instituted, we didn’t have these big platforms. The internet was not as big. We did not have these business models, so now it’s sort of time for a change. And they’re tremendously influential in speech.

So I guess we’re getting to sort of the reform part of our conversation. But just to be clear, their role in political speech, even though that may have not been sort of the top-line issue back in 1996, that’s certainly covered in this idea that they can choose to be neutral or not neutral. It’s not just defamation. It’s not just pornography.

Jim Harper: Look, you know, Professor Citron said that Facebook is powerful, but I think you’d quickly understand the limits of Facebook’s power if Mark Zuckerberg declared it a no-Trump zone or something like that.

Danielle Citron: A hundred percent. So I think people were like, “You’re kidding me?”

Jim Harper: Well, it would likely play into Trump’s hands to have this kid in Palo Alto trying to play presidential politics.

Danielle Citron: Totally — we’d have an alternative social media site that would pop up pretty fast.

Jim Harper: They’re already out there waiting, you know.

Danielle Citron: Yeah, sure.

Jim Harper: So, I mean, Congress has these concerns, but I don’t think they’re real. The capacity to control the country by marionette strings is long believed, but never very well demonstrated.

James Pethokoukis: I think that would certainly — and as I sort of discuss this issue, whether on social media or elsewhere — certainly there are a lot of Republicans who are absolutely convinced that there are 20 Twitter accounts of Republicans suspended for every one that might be on the other side. Have there been actual empirical studies or analyses looking at any of these issues?

Jeff Kosseff: There’s been a lot of anecdotal discussion for a variety of concerns about platforms, which — can we talk about potential solutions now? Okay. So this is probably the most sort of [inaudible] —

Danielle Citron: But should we just say that it’s anecdotes, and if you dig in a bit, it seems completely overblown and untrue? You know, I can come up with 10 stories about, you know, Black Lives Matter protesters who are removed from Facebook and Twitter. So does that make sense? It’s, like, we all have our examples. We can try it out. I think, empirically speaking, it seems to be pretty false. But, you know, let’s — you know, I welcome the studies —

James Pethokoukis: The audience may have questions on this very issue. And we will reserve time — which I should have mentioned earlier — 15 or 20 minutes for people who have questions, and I will even look all the way over on the other side here if they have questions. So if you want to sort of be mentally putting them together.

Again, to get to the issue, I’m just wondering if people grasp the difficulty of the content moderation. At the same time, I don’t want to let these companies off the hook, as if they’ve been doing a fantastic job with content moderation, when it’s clear that it is a work in progress at best.

Danielle Citron: I think it’s going to always be a work in progress, just given the scale and given the types of problems, given the challenges and pressures from outside the United States. You know, what I would like to see companies do, though — and I think we’ve seen some of it, at least with the largest actors — is to be constantly thinking and iterating and experimenting, whether it’s addressing a deepfake problem by, you know, coming up with technical solutions, which is going to be very hard to do, or with new types of policies. The thought is that if there’s pressure on companies to do something and to act reasonably, then they will iterate; they will experiment. There’s no one solution.

James Pethokoukis: Do you think there’s a political space for that iteration? Because when they iterate badly, wrongly —

Danielle Citron: I do.

James Pethokoukis: — it seems like — at least right now that the response is ferocious, that they cannot get it right. And there’s not like, “Well, that was an interesting experiment that went wrong. Let’s see what they come up with next week.” I’m not sure that’s currently [inaudible].

Jim Harper: Jim, I want to challenge your premise a little bit that they’re not doing a good job —

Danielle Citron: They’re doing it. Oh, yeah.

Jim Harper: — because we happily aren’t in a position to see what it would be like if they weren’t doing the job they’re doing. The articles you referred to earlier — what’s the publication, Vice?

Jeff Kosseff: Verge.

Jim Harper: Verge — read those articles about what the moderators are seeing and what they’re preventing you all from seeing, and you’ll have a little bit of respect for what they are doing. It’s one of those things — the things we’re not seeing are really, really horrible —

Danielle Citron: A world without moderation is scary.

Jim Harper: — it might make you think that they’re doing a better job.

James Pethokoukis: Well, indeed, some people criticize these content moderation policies, saying, you know, let a thousand, let a million, let a billion flowers bloom. Let’s just put it up there, and people can decide for themselves, you know, who they want to follow or not follow.

Jeff Kosseff: I would throw my computers and devices out the window if that was the case.

Danielle Citron: Yeah, I wouldn’t use it.

Jeff Kosseff: Yeah, I mean, it would be terrible.

Jim Harper: I want to riff on that to talk about some future scenarios that we might have in mind now. No technology talk would be complete without blockchain.

Danielle Citron: Oh, stop. Every conference literally the answer is not blockchain. I’m just laying that down.

James Pethokoukis: This is not actionable investment advice.

Jim Harper: You know, there are future blockchain-based social networks where you can’t remove content. Somebody could post something, and it’s going to be there forever; there’s no corporation to ask to take it down; it’s distributed across the world; and it would be mathematically impossible to remove content. So it might be coming, and you should be aware that if you could solve this problem right now — sprinkle your magic legislative dust on it — the bad actors would go right over to this other place where things are uncensorable. So that means challenges to the existence or utility of defamation law. That means that revenge porn, once up, is permanently up and can’t be taken down again.

So I can imagine a whole world out there that could emerge and can’t really be altered. So we should think about how all the expectations and all the law that emerged in the past might be deeply challenged by future technologies.

James Pethokoukis: So 230 emerged in the past. I think people who say things are very different now — I think that’s a completely valid point. And I think, you know, looking at, you know, what’s happening now, asking whether this law needs to be reformed is perfectly valid. I don’t think anyone is an absolutist. But keeping in mind the downsides that you’ve mentioned of getting this sort of wrong — and when it’s been wildly successful, people say, “What can government do?” Well, this is something government did. It seemed to work out really well.

We don’t want to screw it up. So being very careful I think about reforms, what do those reforms look like? And we can just sort of go down the line right here. So how would you go about reforming 230? And feel free to comment on each other’s reforms.

Danielle Citron: And you can absolutely throw tomatoes at me, just figuratively not literally. How’s that? I think we should preserve Section 230, but it shouldn’t be a free pass. That is Section 230C1. We should make it conditional. That is, we’re not going to treat an interactive service, we’re not going to treat online service providers as publishers or speakers, if they engage in reasonable content moderation practices. And that’s not looking at any given decision; it is looking at their practices writ large.

The premise of 230 — and let’s go back to the original purpose — was to incentivize self-monitoring. We should go back to that purpose and condition the immunity on engaging in reasonable content moderation practices in the face of known illegality. And it’s a proposal — I have the language at the ready. Ben Wittes and I have offered it. I’ve been talking to a bunch of different Senate offices. Maybe they’ll be interested. There are —

James Pethokoukis: In practice, what would that do?

Danielle Citron: What it would do is mean that you get to enjoy Section 230 immunity, but it would not let the revenge pornographers off the hook — the sites that engage in no content moderation, knowing that there are deepfake sex videos on the site. So if you do nothing in the face of known illegality and mischief, then you shouldn’t enjoy the immunity from liability. You haven’t earned it.

But I would say the Facebooks, the Twitters — they have extensive speech rules. They have extensive speech moderation practices, with a staff of, I think, something like 30,000 people at Facebook at this point. They’re engaged in reasonable content moderation practices.

It would depend on the type of speech. Like, if we’re talking about nonconsensual pornography, what’s reasonable is different from what’s reasonable for threats and defamation. Maybe a hash program like the one Facebook is increasingly engaged in is a really good idea. But hashing content and using AI for threats is impossible, because threats — how we know it’s a threat — it’s contextual. Stalking is as well. Stalking is not one of those things where any single post is inherently illegal; you’ve got to look at the whole picture and context. And so what’s reasonable varies with the content and the approach, and what reasonableness would do is incentivize experimentation and moving with the times and with the technologies.

James Pethokoukis: And is there a moderation at scale issue with that proposal, though?

Danielle Citron: Yeah.

James Pethokoukis: I mean, it seems like it would be really hard.

Danielle Citron: It's aimed not at any specific decision vis-à-vis content, but at scale: the speech rules and practices and design. So it's looking at a company's approach to content moderation, the design of the site, and how, writ large, they respond to certain types of speech and problems, known problems. And so the idea is to look at it: How are they doing it at scale? And what's reasonable for Yelp or for Facebook is certainly different from what's reasonable for Jim's — can we revive your blog for a second?

Jim Harper: Sure, washingtonwatch.com.

Danielle Citron: There we go.

Jim Harper: There’s nothing there.

Danielle Citron: And my old blog, Concurring Opinions: what was reasonable for five law profs running Concurring Opinions 10 years ago, with like 300 commenters on any given day, is different from, you know, the Volokh Conspiracy when it was doing its thing and had thousands of commenters and like six people running it. So it would matter: the size of the company, who was participating in it, what kind of activity they were hosting. The premise of reasonableness is that it's tailored to the circumstances.

James Pethokoukis: I think these are reasonable reforms.

Jeff Kosseff: So, well, it's about reasonableness. But, yeah, like, I share the concern about some of the bad actors. I mean, there are some bad actors, and we need to figure out how to solve the problem they pose. But my concern about a reasonableness standard — and this comes from having practiced cybersecurity law, where a lot of the regulations just come down to reasonableness — is the lack of certainty as to what would be reasonable when, ultimately, there could be federal appellate judges who perhaps don't necessarily understand the technology.

Danielle Citron: Could we ever have specific rules for cybersecurity? They would get outmoded in five seconds.

Jeff Kosseff: Well, but we do. We have NIST standards, and we have some state laws that have incorporated the NIST standards.

Danielle Citron: But they’re standards. They’re broad.

Jeff Kosseff: But they give some controls, and they give specific controls. And I think that without any certainty at all, just having reasonableness when you're conditioning other people's speech on it, that's what I struggle with for Section 230. I wonder how much of the free speech benefit there would be. And perhaps we don't want that. But I do really worry about the lack of certainty if you just have reasonableness. In other areas of the law, there isn't speech that depends on it. But for this, there is. So that would be my concern with just having reasonableness.

Jim Harper: So reasonableness is really hard to administer. But it's probably the only thing to do. The problem is that you've got this legislation in place that has this language, and you've got a Congress that you can't rely on to do the careful little cut-and-pruning. So my solution —

James Pethokoukis: Very skeptical, very cynical.

Jim Harper: Yeah, I’ve been here a couple years. My solution is to go back in time, not have Section 230, and just let the common law develop up to today. And you’d have the closest thing you can approximate to reasonableness.

Danielle Citron: Yeah, you’d have reasonableness. I mean, we’ve had negligence, right, Jim? I mean, in that world, reasonableness would be so litigated. We’d have negligence claims, right?

Jim Harper: I think we’d actually know better what was reasonable than we do now because we have a flat rule against liability, and it’s been mostly people getting booted out of court.

James Pethokoukis: You cannot go back in time.

Jim Harper: You can’t go back in time.

James Pethokoukis: Do not have that DeLorean.

Jim Harper: So we have this piece of legislation —

Danielle Citron: No time machine.

Jim Harper: I don't trust Congress to change it. And as Professor Citron has pointed out, you do have courts hemming it in in places, going back to, you know, pure publisher-type protections on liability, and not where a site is kind of in the business of procuring, trafficking, or whatever it may be. So, yes, reasonableness, but the immediate proposal I have is to keep 230 in place and let judicial opinions continue to work over the law.

James Pethokoukis: And let the companies continue to innovate and iterate.

Jim Harper: Absolutely.

James Pethokoukis: This often comes up — before we get to Jeff. Can’t we just give people, you know, let people just post on these sites, but, you know, let people handle the filters? And why can’t there be just more filters that, “Here’s the kind of content I like to see, and that’s all I will see”? That seems like a —

Jim Harper: I built that on washingtonwatch.com. You could blank out other users; you could blank out particular words.

James Pethokoukis: Why doesn’t that happen with these platforms today? Why isn’t there just a lot more tools available to people?

Jim Harper: I don’t know.

Danielle Citron: They have some block lists. You know, like Twitter enables at least APIs and block lists.

James Pethokoukis: It’s very crude.

Danielle Citron: Yeah, I know. I think they should do better on that.
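
For illustration, here is a minimal sketch, in Python, of the kind of user-controlled filtering being described: the user, not the platform, decides which accounts and words are screened out of their own feed. The data shapes and names are assumptions for the example, not any platform's real API.

```python
from dataclasses import dataclass, field

@dataclass
class FilterPreferences:
    """Per-user filter settings: accounts to hide and words to mute."""
    blocked_users: set = field(default_factory=set)
    muted_words: set = field(default_factory=set)

def visible_posts(posts, prefs):
    """Return only the posts this user has chosen to see.

    Each post is assumed to be a dict with "author" and "text" keys.
    """
    shown = []
    for post in posts:
        if post["author"] in prefs.blocked_users:
            continue  # hide everything from blocked accounts
        text = post["text"].lower()
        if any(word in text for word in prefs.muted_words):
            continue  # hide posts containing any muted word
        shown.append(post)
    return shown

# Example usage
prefs = FilterPreferences(blocked_users={"troll42"}, muted_words={"spoiler"})
feed = [
    {"author": "troll42", "text": "you won't believe this"},
    {"author": "friend", "text": "No spoilers here, promise"},
    {"author": "friend", "text": "Lunch later?"},
]
print(visible_posts(feed, prefs))  # only the "Lunch later?" post survives the filter
```

The point of the exchange is that these controls already exist in crude forms (block lists, muted words, shared block-list APIs); the open question is why they are not richer and easier to use.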

James Pethokoukis: What about you? Any reforms or initiatives?

Jeff Kosseff: Yeah. So I have two. One is fairly specific, and one is broader. The specific one is that a lot of the sites I think we're all concerned about shouldn't be entitled to Section 230 protections. The problem is that Section 230 is often decided at a very early stage, so there's no discovery to determine whether the platform has contributed to the development of the content, in which case it doesn't receive Section 230 protections. So I'd like to see some sort of standard where, in cases where the judge believes there's reasonable cause to think the platform has contributed to the content, limited discovery is allowed on that issue. We do that for personal jurisdiction. I think that would be a way to go after the bad actors that never should be getting protected by Section 230, while still preserving protections for the sites that really are not contributing to the material. So that's one thing.

The second is, I mean, I hear from people on all sides about various issues with 230, ranging from, there shouldn’t be any moderation, to there shouldn’t be any protection at all and they should be moderating everything. And this is all anecdotal. And it’s all, “I have this specific story.” We do not have good empirical evidence. And it is such an important issue. I mean, it affects the speech of everyone in the United States essentially and the ability to receive information.

And so I would like to see a congressionally created commission to really gather a good factual basis for this before even proposing specific solutions because we don’t have that right now. And I’m really worried when I hear all these different stories, which are really opposed to one another. And I don’t know whether any of the solutions will actually pass, but I would like to get a much better factual record before we make any decisions on what to do with platforms or 230.

James Pethokoukis: What is, sort of, the upside-downside risk? You know, what is the upside of smart legislation, and what is the downside of getting it wrong? Again, I don't think there's any debate that Section 230 has been one of the real, you know, legislative foundations of the entire internet economy, so obviously it should be handled with care. So what's the upside of getting these reforms right, and what does the internet look like then? And what are the downsides of just, "Boy, we got it wrong"? What does this all look like? So, a little speculation, whoever wants to take a crack at how it looks.

Danielle Citron: The upside is that if we crafted it well, if we condition the immunity on reasonable content moderation practices or deny really bad actors the ability to enjoy that immunity, we'd have a lot more speech online. We would not have the stalking, harassment and threats, and nonconsensual pornography that drive people offline. And more often those people are women and people from marginalized communities. So there is an upside; there's actually a speech upside.

James Pethokoukis: Any other upsides, or are there just downsides of getting it wrong? How could it be better?

Jim Harper: I see far more downsides than upsides to any changes. And because of the practical problem of getting through Congress, I just don't trust Congress to do it — we could craft something brilliant. The four of us up here, with all the folks in the room, could get together and put together a perfect legislative history and send it down the street, and what we get back might be something radically different. So I would be very hesitant to open up 230 to Congress right now.

Jeff Kosseff: I mean, the decisions are being made by lawyers, I assume. There are a number of lawyers in this room. We're all risk-averse people. And so decisions about moderation have to at least get cleared by the lawyers. And you saw what happened after the sex trafficking amendment: Craigslist, days before it was even signed into law, got rid of its personals section because they said, you know, it's just too much risk. And that's going to — I mean, I think changes to Section 230 — and we might decide it to be —

James Pethokoukis: Were you surprised at that outcome?

Jeff Kosseff: No. I mean, I think —

Danielle Citron: We worked on it, with different offices, and it was depressing what came out of the sausage factory. You know, like, we tried. What came out of that committee was depressing, and what was signed was depressing.

Jeff Kosseff: I testified in the House Judiciary Committee in support of a limited exception for intentionally facilitating sex trafficking. What we ended up with is a standard that, yeah, you really can't comprehend, so the lawyers basically will just be risk-averse. So I mean, I'm definitely not opposed to Section 230 changes. I just think they need to be really, really carefully crafted and deliberate and not just some legislative compromise that gets attached to an appropriations bill right before the December recess. I mean, that's not what we want.

Jim Harper: That’s everything these days.

Jeff Kosseff: Yes, exactly.

Danielle Citron: So FOSTA and SESTA are Jim's proof of concept. You know, like, your worry, Jim: it's FOSTA and SESTA.

Jim Harper: It’s been validated and ready to go.

Danielle Citron: No. I'm a Pollyanna, so I'm always engaged and happy to help offices. So I'm not going to give up on the enterprise. But that, FOSTA and SESTA, is what you're talking about.

James Pethokoukis: I'm going to ask one more question, and then we'll want to thank them for the great panel, but then I'm going to ask for your questions, so whether you're a Pollyanna or a pessimist, be prepared. I think we'll bring mics around.

So if 230 were eliminated, what does that world look like? Again, do we still have these platforms? Does it destroy their business models? Is it something radically different, or is it just a — what does that world look like if we did real harm in the effort of reforming, you know, this provision?

Jeff Kosseff: So I don’t think we know for sure because for the entire modern internet, we’ve had Section 230. So I mean, there are a bunch of different outcomes. Platforms could take the approach that, “Okay, well, the common law and First Amendment give me more protections if I don’t do any moderation,” which I think most of us would say that’s not a great outcome. They could also become risk-averse and just say, “We’re not going to allow user content because that just exposes us to too much risk.” Or perhaps it will be somewhere in between. But I can’t say with any certainty what would happen because that would really be new territory for us.

James Pethokoukis: And perhaps because you would have thrown your computer out the window.

Jeff Kosseff: Yes, exactly.

James Pethokoukis: You’d just not know what was going on in the internet.

Jim Harper: Brent Skorup and Jennifer Huddleston at the Mercatus Center have a good article on the common law developments that they saw coming along when 230 was passed: a trend, because publishing has changed, toward treating platforms as conduits that aren't and can't be responsible for what appears on their sites. So that gives me confidence that the long-term outcome — I know it's debatable, but it gives me confidence that the long-term outcome would probably be good. We'd get a good result, where the economically efficient and free-speech-protective result would pertain.

But in the short term, there'd be a stampede of litigation against the platforms. If the repeal of 230 were to signal that federal law is opposed to protections from liability, then that could actually undercut the proper common law development.

So, you know, I don't know how you get there from here, but if you ask the question in the abstract, what happens if 230 goes away, I think the common law and First Amendment protections for platforms would arrive at about the place where 230 has gotten us, perhaps with some narrow tweaks, so that the worst examples of egregious behavior supported by platforms would be impinged upon and would go away.

James Pethokoukis: Thoughts, or shall we go to questions?

Danielle Citron: We can go to —

James Pethokoukis: We'll go to questions. Fantastic. All right. If you have a question, raise your hand, and we will send the mic over to you. I will start with that table right there. Gentleman with the laptop. If you care to say where you're from, feel free.

Q: Yeah, Carl Szabo with NetChoice. James, you asked about the value of Section 230. Mike Masnick actually has a really good paper that just came out putting the value at $440 billion over the next decade. You asked about content moderation. We actually at NetChoice have a report coming out shortly that aggregates all the transparency reports from Facebook, Google, and Twitter. They removed five billion accounts and posts in six months, so there is active content moderation.

But I did want to hit on something that you brought up at the end. What happens if we get rid of Section 230 — and Section 230, Jeff, as you pointed out in your book, goes to encourage content moderation. For the sites that don't engage in content moderation, it doesn't really matter if they lose Section 230. And, Danielle, you brought up Grindr, which seems to not engage in a lot of content moderation. Do you expect to see a rise in hate and offensive and terrorist content as platforms move away from risky content moderation toward a — well, we achieve total immunity if we do nothing?

Danielle Citron: That’s just not true, though. They don’t risk it by engaging in content moderation. You’re saying if we change Section 230, is that —

Q: Yeah.

Danielle Citron: Oh, so sorry. I just wanted to make sure I understand the premise of the question. So we wipe out Section 230 —

Q: Correct.

Danielle Citron: — you're saying, we wipe out Section 230, and we live in a world in which platforms are afraid of being treated as publishers, and so they engage in no content moderation. And I think that's in part what Quinta Jurecic and I worried about in a paper that we wrote about FOSTA and SESTA, that we would have both bad outcomes — Scylla and Charybdis — of no moderation or too much moderation. So I share your concern that we're going to end up looking like either 8chan, it's like the Wild West, or, you know, it's family-friendly, it's just overly censored, just what we thought 25-whatever years ago.

James Pethokoukis: I'm jumping back in. I mean, do you think a lot of these companies would just love to be out of this business? They would love to be out of the business of figuring out what's hate speech and what's not hate speech.

Danielle Citron: Oh, yeah. They would like Europe off their back, but that's not because of us. Does that make sense? Like, the hate speech conversation, that is because of Europe, truly; that pressure is coming less from us.

Jeff Kosseff: I would also just say that they’d love to be out of this business, but they need to be in this business. This is a product and a service they’re making a lot of money off of. This is part of the job.

Jim Harper: It should be said at least once in this event that the users out there are a little bit responsible too when it comes to .

Danielle Citron: Totally.

Jim Harper: Whatever happened to “don’t believe everything you read,” and turn it off, and put it down.

James Pethokoukis: Another question. I mean, that side, you’re very quiet over there. I’ll come back to you. That table right there, right next to the mic. How convenient.

Q: My name is Roger Cochetti, and during the 1990s, I was the IBM executive responsible for worldwide internet policy, and thus had a front-row seat in the development of Section 230. There was an important piece of the puzzle the panel hasn’t discussed, and it leads me to think of what might be a blunderbuss way to deal with the issue.

One of the elements that went into the assumptions that we all made at that time was that the future would be very much like the past. And most of us had in mind, at the time 230 was approved, computer bulletin boards. That's what we thought of as its content; how could a poor, you know — the thinking was that someday there would be 100 million people on the internet, and there would be 100,000 websites. And everyone really assumed that there would be a Facebook in the future; in fact, there would be thousands of Facebooks, and none of them would have more than like 2 percent market share. And everyone assumed there would be a Google in the future; in fact, there would be hundreds of Googles, and none of them would have more than 1 percent market share. It would be a distributed system, like the internet itself is distributed. So there was never any expectation of the enormity that we see today —

James Pethokoukis: And your question, sir?

Q: And the question is, perhaps, that assumption went into the thinking. Because of that, it was assumed that the internet was going to be made up of thousands and thousands of small businesses that could never possibly monitor 100 million people posting on their bulletin boards. The question then is if we — and I realize this is a blunderbuss solution, but think about it for a moment. Suppose we went as far as to say that any network that has a billion users or a market value of a trillion dollars is by definition a publisher, with the same rights and duty of care that the Northwest Current and The Georgetowner Today have when they post want ads, where they have to look at them and make a judgment. At that scale, you are a publisher, and there's no gray area. You have the same duty-of-care responsibility that a call-in radio program or a neighborhood newspaper has when you reach that size.

Danielle Citron: Thank you.

James Pethokoukis: All right. Anyone who wants to jump in on that one?

Jim Harper: I might think too abstractly about most things, but I have a hard time thinking that justice changes, that the terms of justice in the world change, if the size of your network goes above 999,999,999. So I want the rules to be rooted in justice and not in administrative stuff that keeps the lawyers working. Either you're doing it right or you're doing it wrong. There's reasonable, and there's unreasonable. And the law should hew as closely as possible to that, not to "you've got to be reasonable once you reach a certain threshold of users or market capitalization" or whatever the case may be. That type of rule exists all over the place, and you see lots of small businesses staying small so they don't have to provide health insurance to their employees. And I don't want more of that, not in this area.

James Pethokoukis: All right. The gentleman right here. Right in front of me.

Q: Good morning. Thank you for doing this. I'm Jeff Jarvis from the Newmark School of Journalism at CUNY. And I'm also a member of a, pardon the long title, Transatlantic Working Group on Content Moderation and Freedom of Expression, organized by Susan Ness, who is sitting two people to my left. I learned a lot through that work, so I want to just throw two quick ideas out to get your response.

One thing I've learned from that group is that maybe we should concentrate not on content, but on behaviors. You're trying to get rid of behaviors, and that's a different — that's what law does. And there's a suggestion coming from one member of the group that there ought to be national internet courts, where matters of legality would go to a court where we negotiate legal norms in public with due process. And in terms of the legal-but-irritating, which is what the UK wants to go after, now we have Facebook's oversight board, where they're trying to establish that. And we'll see how that goes. So I wonder about your views of these kinds of efforts to, in essence, come up with new common law through different paths as we negotiate this very new landscape.

James Pethokoukis: Danielle, you’ve written about this quite a bit. So, yeah.

Danielle Citron: Yeah, in 2014, in my book, "Hate Crimes in Cyberspace," I argued that companies should engage in what I've called, in a different venue, technological due process: that they should be transparent and accountable for their content moderation practices. That is, it could be as a matter of law, but that they should. And I'm not sure if I see — and maybe Jeff and Jim can join me on this — the difference. Conduct and speech, the line is really blurry. The difference between content and conduct, you know — yes, it's true, I think we should be more careful to realize that some speech actually is tantamount to conduct, in that with stalking, harassment, and workplace sexual harassment, there are some moments where we can say, "You know, that's truly like a bludgeon, an instrument, rather than, you know, conveying a message or a viewpoint." But largely, it's really hard. So I'm not sure if that would be — I feel like I have my speech friends with me here, you know, that conduct and speech are one and the same. I'm going to channel Catharine MacKinnon here a little bit. Like, a lot of conduct is expressive and vice versa.

Jim Harper: Yeah, but I don’t want to talk about that. I’m interested in the —

Danielle Citron: No, no. You talk about what you want to talk about. Yeah.

Jim Harper: — the national internet court. It sounds like a thing you'll get if you just wait around for a couple of years while the older judges move out and younger judges move in. I don't know about a specialized court for this. And I actually want to challenge you, since you've mentioned it a second time now, on the idea that companies should be providing due process and transparency. I'm a big, big fan of transparency in the governmental context —

Danielle Citron: And not because they're state actors, but just that we could require it of them.

Jim Harper: But true transparency in your content moderation would be a huge gift to your adversary because over on 4chan — or is it 8chan now? I’m a fuddy-duddy. I think of 4chan. They would go and figure out how to game the rules you’ve just published and do the thing that falls right on the line between the two.

Danielle Citron: Yeah, but say, “You can’t game us.” That’s nonsense. No.

Jim Harper: It is not nonsense. They work hard. Smart people, unfortunately socially maladaptive, smart people —

Danielle Citron: But platforms are smart too. You can’t game us.

Jim Harper: — working very hard to game published rules. That's a gift, I think, to them.

Danielle Citron: That’s law. So you’re saying we shouldn’t have law that’s published so we can —

Jim Harper: No, no. But it’s gameable, and you have to have laws that don’t have too many nooks and crannies because people will game them rather than just live right.

Jeff Kosseff: So I would say there has to be more transparency than there is right now. And we’re getting better. I think today compared to three or four years ago, there is much more transparency, but we have a long way to go.

And this is one thing that I really stress to the tech companies: Section 230 is not a constitutional right. This is something that Congress has provided, and there are some people who will say Section 230 isn't a benefit to the tech companies. And I think that's nonsense. Of course it's a benefit to the tech companies. It's a benefit to the public as well, for free speech platforms. But tech companies do receive — I mean, they've been able to structure business models around this. And I have no problem with some sort of transparency basically being a premise of Section 230, because we need to have a better idea of what's going on. We haven't had that, and you see what's going on now with the debate, which is chaotic.

James Pethokoukis: Where does the transparency come in? So when someone's account gets suspended, it's not a mystery. You're like, "Oh, that makes sense. I get why that happened." And the person can have that kind of expectation: if they engage in that behavior, they will understand that's a likely result.

Jeff Kosseff: I think that, and I think it could maybe help improve user behavior compared to where it is now, but also more transparency about the broader policies. I mean, the platforms have policies posted, but they're fairly broad. And I think getting more specificity as to how the decisions are made, I see that as being a benefit overall.

James Pethokoukis: Shorter and clearer policies. Group over here, reluctant to ask a question, but then that one shot up. I’m sorry, I’m rudely pointing with my red pen right there.

Q: Elizabeth Banker, I’m with . I have a question about Professor Citron’s proposal. As many commentators have noted, one of the advantages of Section 230 is that it does allow companies that are sued to get rid of the cases fairly quickly through a motion to dismiss asserting the 230 immunity. It sounds like your proposal would require litigation before being able to get to that stage. And I’m just wondering about the impact on smaller companies. We did talk about how, you know, without Section 230, the bigger companies would have the resources to do more litigation. Well, smaller companies wouldn’t necessarily have the same resources. So is there a way that your idea kind of takes into account that smaller business impact?

Danielle Citron: So reasonableness itself would. What is reasonable is different for the small company, but that doesn't address your excellent point about the cost of litigation. What's reasonable we can adjudicate, and it would be very different for the small versus the big. And it's true that there would be costs.

At the same time, we shouldn't forget that there are costs to the current system, where there's no expense for the platforms, where there are the dismissals, and what happens is we lose a lot. We lose a lot in terms of the speech of victims who go offline. We have a lot of harm that is now, you know, externalized and borne only by individuals and not by, you know, the revenge porn operators.

So I guess what I would say is, I recognize that that would be true: it would be more challenging and expensive for the small company to face litigation than it would be for Facebook. And even if it cashes out so that you have less onerous rules for the small than the big, there would be the expense, of course. You might have more censorship creep for the small versus the big because they can't litigate. At the same time, some of those small actors are the folks running revenge porn sites. And so I think we can't overlook that there are meaningful trade-offs, and it's not all trade-offs in speech in one direction, which is just about that poor provider. There are also a lot of trade-offs for victims who, without any change, will be driven offline.

James Pethokoukis: Excellent. There’s another question over here. Maybe, gentleman, right directly.

Q: Hi, thank you. I just wanted to pry into something Jim brought up at the end that I think is a really important point, and I was wondering what the panel would think about it. You know, the question posed at today's event is: Should we reform Section 230? And I think that there are two halves to that. We have the academic question of, well, are there possible ways that 230 can be improved? And then there's the other side, which is: Should we kick off that process here in DC and in Congress? And I think there seems to be, you know, complete agreement on the panel that 230 has been very beneficial and is very important to the internet that we all enjoy today.

So my fairly quick question is, you know, with Congress not necessarily being trusted with a lot of tech policy, or even policy more generally, to get things through in a way that really gives us what we want, would you kick off that process? Would you take the risk that Congress could potentially undermine Section 230 just to get through reforms that you see as beneficial?

Jeff Kosseff: I mean, we have Article 1, which pretty much says they have to — I mean, they’re the ones who are charged with it. And I think looking at it is one thing. I mean, one of the jobs of Congress is to look at the laws that they’ve passed. So I think I would agree there are some challenges that Congress has with understanding technology, and I’ll just be as broad as possible in saying that. But that’s not a reason to not look at laws. I think there are efforts to revive OTA, for example, which would help inform that, and is absolutely essential.

James Pethokoukis: The Office of Technology Assessment.

Jeff Kosseff: Yeah, which basically was Congress’ in-house technology think tank, and that sort of thing — so yeah, I think Congress has to look at laws. Whether they should do anything is something different. I’d like my commission to help inform that. But, yeah, I mean, we have no other choice.

Jim Harper: Your question I think helps show that we’re kind of talking about two different types of governmental bodies. Do we trust the legislature, or do we trust the judiciary more? The talk about common law is over on the judiciary side. It might be handled better there. No question that the world of litigation can be reformed quite a bit itself. So, you know, the costs that the earlier question referred to are serious and not to be forgotten.

Maybe familiarity breeds contempt, but I've worked in Congress, and I've been a Congress watcher for a lot of years, and I don't think I would try to run anything through there. We're much better off keeping 230 where it is and letting the courts, case by case, perhaps turn it back where appropriate.

James Pethokoukis: We have time for only one more question, and it needs to be concise, powerful. So if you're going to raise your hand — if you raise your hand, there's going to be a huge — okay, sir. This is a biggie. Wait for the mic.

Jim Harper: Wait for the mic.

James Pethokoukis: Wait for the mic.

Jim Harper: Everybody needs to hear you.

Q: Will Duffield, Cato Institute. These private moderation systems that these platforms manage are essentially private legal systems. And in order for them to work to get buy-in, they need to be legitimate in someone’s eyes. In whose eyes should they attempt to legitimate themselves: users, states, the population at large? What renders these private governance systems legitimate or not?

Jim Harper: The market. Cato Institute, I had it teed right up.

Danielle Citron: I mean, not a crazy response.

James Pethokoukis: Do you not know the answer to that question?

Jim Harper: But the question allows me to put an important gloss on what I said about transparency. Transparency I do think would be a gift to people who are trying to game your system. But if transparency will give your users and your governmental overseers confidence that what you’re doing is right, then, okay, go ahead. But you’re going to have problems with your adversaries because of that transparency too.

James Pethokoukis: All right. I think that will have to be it. All-star panel, all-star audience. Thank you very much for coming.