The Supreme Court’s First Crack at Section 230(c)(1)

Ius & Iustitium welcomes submissions from academics, practicing lawyers, and students interested in the classical legal tradition. Adam Candeub is Professor of Law and Director of the Intellectual Property, Information & Communications Law Program at the Michigan State University College of Law.


The Supreme Court last week heard oral argument in Gonzalez v. Google—its first opportunity to consider Section 230(c)(1) of the Communications Decency Act, the statute that sets the basic liability rules for the internet. The Gonzalez plaintiffs represent victims of the Paris, Istanbul, and San Bernardino terrorist attacks. They claim that YouTube’s targeted recommendations radicalized the terrorists to commit their heinous crimes and that YouTube is therefore liable for damages under the Anti-Terrorism Act (ATA), 18 U.S.C. § 2333. Google, YouTube’s parent company, argues that Section 230(c)(1) shields it from that claim.[1]

A broad Supreme Court ruling in Gonzalez would allow platforms to continue to violate consumer fraud, contract, and even civil rights law with impunity, as I have argued here, here, and here. A broad ruling could cement faulty lower court rulings that ignore the provision’s text. Perhaps even more disturbing, a broad reading would preempt the Texas and Florida laws that aim to end platform viewpoint discrimination. Similarly, a broad ruling would also protect the platforms from possible Bivens liability they might face for colluding with the government to censor speech in violation of the First Amendment—an issue the Twitter files have recently revealed.

An overbroad ruling, therefore, would disrupt a vital balance in our constitutional structure: granting special legal protections to entities that already wield something approaching governmental power. What was perhaps most distressing about the oral argument was that several conservative justices seemed more interested in their potential rulings’ impact on Google’s bottom line than in the institutions that maintain our Republic, such as free democratic discourse. It was Justices Jackson, Barrett, Kagan, and Sotomayor who most pushed the lawyers to offer sound rules based on statutory text.

If the Court rules on Section 230(c)(1), rather than deciding the case on the plaintiffs’ questionable liability theory that YouTube’s recommendations cause terrorism, it will likely rule in a way that furthers one of two views of the statute. On one hand, ignoring the statute’s text, libertarians and corporatists want the Court to ratify some lower courts’ extreme interpretation of Section 230(c)(1) that immunizes platforms for all of their editorial judgments. Congress intended the provision to help the fledgling dial-up internet service providers, such as AOL and Prodigy, create family-friendly online spaces free from obscene and violent images. Now, with the huge internet platforms taking those providers’ place, the libertarians and corporatists are happy with a permanently expanded liability protection.

On the other hand, some courts, taking a textualist view, have ruled that Section 230(c)(1) offers limited protection based on what the provision’s words provide: it protects platforms from causes of action, such as libel, fraudulent representation, or criminal threat, in which being a publisher of user content is an element. It’s not a weird get-out-of-jail-free card for all of their editorial decisions. The Court should, if it rules on the matter, make that basic point clear.

The Legal Issues

The internet’s primary liability rule, Section 230(c)(1), mirroring traditional liability rules for telegraphs and telephones, protects internet platforms from causes of action with elements that treat platforms as publishers or speakers of their users’ statements. It simply states that “[n]o provider or user of an interactive computer service [e.g., Google] shall be treated as the publisher or speaker of any information provided by another information content provider,” i.e., by another user. If you libel your friend on Facebook, Section 230(c)(1) protects Facebook, limiting your friend’s legal recourse to suing you—but if Facebook wrongs a user with its own speech or action, Section 230(c)(1) does not apply.

As the oral argument suggested, the Gonzalez case will turn on three issues. First, there is a narrow liability question: whether YouTube’s “targeted recommendations,” i.e., the list of related videos that appears on your screen when you watch a YouTube video, are its own speech and, therefore, outside Section 230(c)(1)’s protection—or whether targeted recommendations are simply reorganized user speech, thus “information provided by another,” as YouTube maintains, and protected by Section 230(c)(1).

Second, there is a broader question: what does Section 230(c)(1) cover and, more specifically, should the Court use this opportunity to bless some of the courts of appeals’ overly broad readings of Section 230(c)(1) that contradict the statute’s text? As mentioned above, broad readings, cementing several lower court decisions, would allow the platforms to operate as unofficial government censors free from any effort to require fairness or even honesty when dealing with users. Similarly, a broad reading would cement platforms’ ability to host, encourage, or support all sorts of illegality, ranging from state human trafficking crimes to facilitating rape and sexual harassment. While the narrow question is important for YouTube, the broad question, if the Court chooses to answer it, will have enormous ramifications for the internet and will advance either the libertarian/corporatist or the text-based view of Section 230.

Third, there is the question of the Gonzalez plaintiffs’ underlying liability theory, which is strained if not facially absurd. They claim that but for YouTube’s video recommendations, the Paris, Istanbul, and San Bernardino terrorists never would have become violent radicals. But, as the old saw goes, bad facts make bad law.

The Narrow Question: Targeted Recommendations

The question of whether targeted recommendations are YouTube’s own speech or simply the algorithmically reorganized speech of its users is a fascinating, even delicious legal question. At what point does YouTube’s presentation of others’ speech rise to the level of its own expression, ceasing to be “information provided by” another and becoming YouTube’s own? Or, as Justice Jackson asked, is there “really a difference between recommendations… and core 230 conduct [with immunity]”? Unfortunately, the lawyers were not helpful here, and the justices seemed genuinely confused about the issue.

For what it’s worth, I do have an answer in this law review article. It relies on the Court’s expressive conduct precedent, because algorithms are acts that the platforms perform on users’ speech. Only when YouTube’s algorithms are intended to communicate a “particularized message” in a coherent verbal product that viewers would likely understand do those algorithms constitute YouTube’s own speech.

Some algorithms are minimally expressive, such as Twitter’s original algorithm for displaying tweets: arrange them in chronological order. But some algorithms, as Elon Musk has revealed to the world in the Twitter files, are not so benign and may well elevate one viewpoint over another; viewers could reasonably understand or detect a message from their operation. Targeted recommendations seem closer to Twitter’s chronology algorithm. Like the Dewey Decimal System or a librarian directing library users to relevant books, they don’t really express a viewpoint beyond “you might be interested in this.”
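For readers who want the distinction made concrete, a toy sketch may help. Everything in it is hypothetical: the Post fields, the topic weights, and both ranking functions are invented for illustration and do not describe how Twitter or YouTube actually rank content. The point is only that a neutral ordering rule conveys nothing beyond “newest first,” while a weighted ranking embodies choices from which a viewer might infer a message.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    text: str
    timestamp: datetime
    topic: str

def chronological_feed(posts):
    """Minimally expressive ordering: a neutral rule applied to users' speech."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def weighted_feed(posts, topic_weights):
    """Hypothetical viewpoint-weighted ranking: boosting or burying topics is the
    kind of choice from which viewers could plausibly infer a message."""
    return sorted(posts, key=lambda p: topic_weights.get(p.topic, 1.0), reverse=True)

if __name__ == "__main__":
    posts = [
        Post("alice", "Local election recap", datetime(2023, 2, 20), "politics"),
        Post("bob", "New cat video", datetime(2023, 2, 21), "pets"),
        Post("carol", "Vaccine policy debate", datetime(2023, 2, 22), "health"),
    ]
    print([p.text for p in chronological_feed(posts)])
    print([p.text for p in weighted_feed(posts, {"pets": 2.0, "politics": 0.5})])
```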

How can the Court tell whether the algorithms are content neutral or promote a particular viewpoint? There is nothing in this case’s record on how the algorithms work—but as the Twitter files show, there is a great deal to learn. As Justice Kagan observed, she and her colleagues are not “the nine greatest experts on the Internet.” The Supreme Court would be wise simply to remand the case for discovery into how these algorithms operate.

The Broad Question of Section 230(c)(1)’s Scope

This case’s implications extend beyond the protection afforded targeted recommendations; also at stake is the scope of Section 230(c)(1). Specifically, does Section 230(c)(1) protect platforms’ decisions to collude with the government to silence speech, or to discriminate against users based upon their viewpoints or even their race or religion, and does it allow the platforms to break promises and lie about their content moderation and promotion policies?

And here the text-based critique is strongest. The provision only protects platforms from being sued in actions, such as defamation, in which the act of publishing is an element of the cause of action, not from actions for fraud, discrimination, or aiding and abetting sexual assault, in which being a publisher is not an element. As Justice Jackson described the provision, “just because [libelous materials are] on your website, it doesn’t mean you’re going to be held automatically liable for it. And that’s (c)(1). . . . That seems to me to be a very narrow scope of immunity that doesn’t cover whether or not you’re making recommendations or promoting or doing anything else.”

In contrast, the platforms, such as Google and Facebook, have consistently argued that Section 230(c)(1) protects all “editorial functions” and decision-making for all content on their platforms. Here, Google argued that YouTube’s recommendations were its editorial function of transmitting “information provided by another” and, therefore, received Section 230(c)(1) protection. Although several courts of appeals have embraced this misinterpretation, at least one Supreme Court justice has strongly rejected it, as have some lower courts. Further, reading Section 230(c)(1) to protect all of a platform’s editorial decisions violates the rule against surplusage by rendering Section 230(c)(2) a nullity.

But these textual questions did not seem to interest several of the conservative justices. Some feared that paring back the extravagant Section 230(c)(1) legal protection some lower courts have given the platforms would result in endless lawsuits. On this view, the genie is already out of the bottle: moderating billions of posts is impossible, so the platforms should get a pass lest lawsuits swamp the federal judiciary. Better to let Congress sort out the mess.

This argument is truly disheartening to conservatives. The justices supported the claim that interpreting Section 230 as written would hurt Google’s bottom line by citing the long list of amici who filed Chicken Little briefs arguing that any revision of Google’s protection would break the internet. As Politico reported, many of these briefs are just Google-funded astroturf. More important, everyone knows the influence Google wields over think tanks, foundations, and academe. They sing Google’s song, at least in part, because Google pays their bills. Determining this case’s factual predicate through reliance on D.C. swamp astroturf amici is hardly Equal Justice Under Law.

Further, the Court closed its eyes for twenty years to lower court decisions that ignored Section 230(c)(1)’s text and intent. It cannot now, like Pontius Pilate, wash its hands and send the matter to Congress. That body has seen hundreds of Section 230 proposals over the last decade; none of them has gone anywhere, no doubt in part because of Google’s influence. Section 230 allowed the platforms to dominate our information world and, indeed, our political system. It seems cowardly to punt the question of Section 230 to that same political system.

Last, several justices asked about an argument put forth in the Internet Law Scholars brief, a Supreme Court amicus brief filed on behalf of a group of leading internet and copyright law professors, and in the Computer & Communications Industry Association (CCIA) brief, filed on behalf of a group of trade associations. Their argument is new—I have not seen it in over ten years of Section 230 litigation. Perhaps no one has argued this position because it ignores Section 230’s text and history—and, most important, omits the statute’s own definition of “publishing,” found in 47 U.S.C. § 274.

Their argument looks to the statute’s definition of “access software provider” as “a provider of software . . . or enabling tools that do any one or more of the following: (A) filter, screen, allow, or disallow content; (B) pick, choose, analyze, or digest content; or (C) transmit, receive, display, forward, cache, search, subset, organize, reorganize, or translate content.” 47 U.S.C. § 230(f)(4). The briefs take this definition—which certainly covers targeted recommendations—and, through a Rube Goldberg-esque argument, loop it into the meaning of “publisher” in Section 230(c)(1).

As I have argued in the Yale Journal on Regulation blog, this is just a weak backdoor argument for the broad “editorial discretion” protection that the statute does not confer. Among this argument’s problems is that the Telecommunications Act already uses “publishing” in a definition that contradicts the Internet Law Scholars’ and CCIA’s claim. They failed to mention to the Court that the statute defines “electronic publishing” to mean “the dissemination, provision, publication, or sale to an unaffiliated entity or person” (emphasis added). 47 U.S.C. § 274. Contrary to their claims, “publish” is not a broad common law term that loops in all sorts of terms related to dissemination as found in the definition of “access software provider.” It is a narrow, statutory term.

While this might be a small—and concededly boring—issue of statutory construction, it seemed highly revealing about the way conservatives too often think. They always seem so eager to find jiggered ways to limit corporate liability. Concededly, that may be a good idea for us all; American tort regimes can be brutal. But, perhaps conservatives should react differently when that jiggering undermines the statute’s clear text. And, in particular, when that jiggering undermines institutional interests that redound to the common good—like requiring some accountability from the internet platforms that drive democratic deliberation today. 

The Underlying Liability Issue

Justice Barrett suggested at oral argument that the whole Section 230 issue could go away if the underlying claim against which Google has raised the Section 230(c)(1) defense is dismissed. The following day, the Court examined the underlying claim in a different case, Taamneh—but both parties agreed that the Taamneh result would govern Gonzalez as well. And the Taamneh oral argument did not go well for the plaintiffs; the Court just wasn’t buying it. It is therefore quite possible that the Section 230 issue will wait for another day.

Final Thoughts

The large internet platforms have transformed central institutions of American life, from disrupting the family and sexual norms to, as the Twitter files reveal, facilitating government power to control speech and stifle criticism. Conservatives should want to protect these institutions. Extending the judge-made immunities that protect the platforms would do the opposite, preventing the democratic process from protecting those institutions. It’s painful to see some conservative judges jump through textual hoops and close their eyes to the statute Congress wrote in order to protect the platforms.

Adam Candeub


[1] A companion case, Twitter v. Taamneh, argued the day after, presents the question whether a similar theory of liability is tenable under the Justice Against Sponsors of Terrorism Act (JASTA)—and if the Court rules the JASTA claim untenable, the ATA claim would fall as well.