Opinion
22-277
07-01-2024
Argued February 26, 2024
ON WRITS OF CERTIORARI TO THE UNITED STATES COURTS OF APPEALS FOR THE ELEVENTH AND FIFTH CIRCUITS
In 2021, Florida and Texas enacted statutes regulating large social-media companies and other internet platforms. The States' laws differ in the entities they cover and the activities they limit. But both curtail the platforms' capacity to engage in content moderation: to filter, prioritize, and label the varied third-party messages, videos, and other content their users wish to post. Both laws also include individualized-explanation provisions, requiring a platform to give reasons to a user if it removes or alters her posts.
NetChoice LLC and the Computer & Communications Industry Association (collectively, NetChoice), trade associations whose members include Facebook and YouTube, brought facial First Amendment challenges against the two laws. District courts in both States entered preliminary injunctions.
The Eleventh Circuit upheld the injunction of Florida's law, as to all provisions relevant here. The court held that the State's restrictions on content moderation trigger First Amendment scrutiny under this Court's cases protecting "editorial discretion." 34 F. 4th 1196, 1209, 1216. The court then concluded that the content-moderation provisions are unlikely to survive heightened scrutiny. Id., at 1227-1228. Similarly, the Eleventh Circuit thought the statute's individualized-explanation requirements likely to fall. Relying on Zauderer v. Office of Disciplinary Counsel of Supreme Court of Ohio, 471 U.S. 626, the court held that the obligation to explain "millions of [decisions] per day" is "unduly burdensome and likely to chill platforms' protected speech." 34 F. 4th, at 1230.
The Fifth Circuit disagreed across the board, and so reversed the preliminary injunction of the Texas law. In that court's view, the platforms' content-moderation activities are "not speech" at all, and so do not implicate the First Amendment. 49 F. 4th 439, 466, 494. But even if those activities were expressive, the court determined the State could regulate them to advance its interest in "protecting a diversity of ideas." Id., at 482. The court further held that the statute's individualized-explanation provisions would likely survive, even assuming the platforms were engaged in speech. It found no undue burden under Zauderer because the platforms needed only to "scale up" a "complaint-and-appeal process" they already used. 49 F. 4th, at 487.
Held: The judgments are vacated, and the cases are remanded, because neither the Eleventh Circuit nor the Fifth Circuit conducted a proper analysis of the facial First Amendment challenges to Florida and Texas laws regulating large internet platforms. Pp. 9-31.
(a) NetChoice's decision to litigate these cases as facial challenges comes at a cost. The Court has made facial challenges hard to win. In the First Amendment context, a plaintiff must show that "a substantial number of [the law's] applications are unconstitutional, judged in relation to the statute's plainly legitimate sweep." Americans for Prosperity Foundation v. Bonta, 594 U.S. 595, 615.
So far in these cases, no one has paid much attention to that issue. Analysis and arguments below focused mainly on how the laws applied to the content-moderation practices that giant social-media platforms use on their best-known services to filter, alter, or label their users' posts, i.e., on how the laws applied to the likes of Facebook's News Feed and YouTube's homepage. They did not address the full range of activities the laws cover, and measure the constitutional against the unconstitutional applications.
The proper analysis begins with an assessment of the state laws' scope. The laws appear to apply beyond Facebook's News Feed and its ilk. But it's not clear to what extent, if at all, they affect social-media giants' other services, like direct messaging, or what they have to say about other platforms and functions. And before a court can do anything else with these facial challenges, it must "determine what [the law] covers." United States v. Hansen, 599 U.S. 762, 770.
The next order of business is to decide which of the laws' applications violate the First Amendment, and to measure them against the rest. For the content-moderation provisions, that means asking, as to every covered platform or function, whether there is an intrusion on protected
editorial discretion. And for the individualized-explanation provisions, it means asking, again as to each thing covered, whether the required disclosures unduly burden expression. See Zauderer, 471 U.S., at 651.
Because this is "a court of review, not of first view," Cutter v. Wilkinson, 544 U.S. 709, 718, n. 7, this Court cannot undertake the needed inquiries. And because neither the Eleventh nor the Fifth Circuit performed the facial analysis in the way described above, their decisions must be vacated and the cases remanded. Pp. 9-12.
(b) It is necessary to say more about how the First Amendment relates to the laws' content-moderation provisions, to ensure that the facial analysis proceeds on the right path in the courts below. That need is especially stark for the Fifth Circuit, whose decision rested on a serious misunderstanding of First Amendment precedent and principle. Pp. 12-29.
(1) The Court has repeatedly held that ordering a party to provide a forum for someone else's views implicates the First Amendment if, though only if, the regulated party is engaged in its own expressive activity, which the mandated access would alter or disrupt. First, in Miami Herald Publishing Co. v. Tornillo, 418 U.S. 241, the Court held that a Florida law requiring a newspaper to give a political candidate a right to reply to critical coverage interfered with the newspaper's "exercise of editorial control and judgment." Id., at 243, 258. Florida could not, the Court explained, override the newspaper's decisions about the "content of the paper" and "[t]he choice of material to go into" it, because that would substitute "governmental regulation" for the "crucial process" of editorial choice. Id., at 258. The next case, Pacific Gas & Elec. Co. v. Public Util. Comm'n of Cal., 475 U.S. 1, involved California's attempt to force a private utility to include material from a certain consumer-advocacy group in its regular newsletter to consumers. The Court held that an interest in "offer[ing] the public a greater variety of views" could not justify compelling the utility "to carry speech with which it disagreed" and thus to "alter its own message." Id., at 11, n. 7, 12, 16. Then in Turner Broadcasting System, Inc. v. FCC, 512 U.S. 622, the Court considered federal "must-carry" rules, which required cable operators to allocate certain channels to local broadcast stations. The Court had no doubt the First Amendment was implicated, because the rules "interfere[d]" with the cable operators' "editorial discretion over which stations or programs to include in [their] repertoire." Id., at 636, 643-644. The capstone of this line of precedents, Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston, Inc., 515 U.S. 557, held that the First Amendment prevented Massachusetts from compelling parade organizers to admit as a participant a gay and lesbian group seeking to convey a
message of "pride." Id., at 561. It held that ordering the group's admittance would "alter the expressive content of the[ ] parade," and that the decision to exclude the group's message was the organizers' alone. Id., at 572-574.
From that slew of individual cases, three general points emerge. First, the First Amendment offers protection when an entity engaged in compiling and curating others' speech into an expressive product of its own is directed to accommodate messages it would prefer to exclude. Second, none of that changes just because a compiler includes most items and excludes just a few. It "is enough" for the compiler to exclude the handful of messages it most "disfavor[s]." Hurley, 515 U.S., at 574. Third, the government cannot get its way just by asserting an interest in better balancing the marketplace of ideas. In case after case, the Court has barred the government from forcing a private speaker to present views it wished to spurn in order to rejigger the expressive realm. Pp. 13-19.
(2) "[W]hatever the challenges of applying the Constitution to ever-advancing technology, the basic principles" of the First Amendment "do not vary." Brown v. Entertainment Merchants Assn., 564 U.S. 786, 790. And the principles elaborated in the above-summarized decisions establish that Texas is not likely to succeed in enforcing its law against the platforms' application of their content-moderation policies to their main feeds.
Facebook's News Feed and YouTube's homepage present users with a continually updating, personalized stream of other users' posts. The key to the scheme is prioritization of content, achieved through algorithms. The selection and ranking is most often based on a user's expressed interests and past activities, but it may also be based on other factors, including the platform's preferences. Facebook's Community Standards and YouTube's Community Guidelines detail the messages and videos that the platforms disfavor. The platforms write algorithms to implement those standards, for example, to prefer content deemed particularly trustworthy or to suppress content viewed as deceptive. Beyond ranking content, platforms may add labels, to give users additional context. And they also remove entirely posts that contain prohibited subjects or messages, such as pornography, hate speech, and misinformation on certain topics. The platforms thus unabashedly control the content that will appear to users.
Texas's law, though, limits their power to do so. Its central provision prohibits covered platforms from "censor[ing]" a "user's expression" based on the "viewpoint" it contains. Tex. Civ. Prac. & Rem. Code Ann. §143A.002(a)(2). The platforms thus cannot do any of the things they typically do (on their main feeds) to posts they disapprove (cannot demote, label, or remove them) whenever the action is based on the
post's viewpoint. That limitation profoundly alters the platforms' choices about the views they convey.
The Court has repeatedly held that type of regulation to interfere with protected speech. Like the editors, cable operators, and parade organizers this Court has previously considered, the major social-media platforms curate their feeds by combining "multifarious voices" to create a distinctive expressive offering. Hurley, 515 U.S., at 569. Their choices about which messages are appropriate give the feed a particular expressive quality and "constitute the exercise" of protected "editorial control." Tornillo, 418 U.S., at 258. And the Texas law targets those expressive choices by forcing the platforms to present and promote content on their feeds that they regard as objectionable.
That those platforms happily convey the lion's share of posts submitted to them makes no significant First Amendment difference. In Hurley, the Court held that the parade organizers' "lenient" admissions policy did "not forfeit" their right to reject the few messages they found harmful or offensive. 515 U.S., at 569. Similarly here, that Facebook and YouTube convey a mass of messages does not license Texas to prohibit them from deleting posts they disfavor. Pp. 19-26.
(3) The interest Texas relies on cannot sustain its law. In the usual First Amendment case, the Court must decide whether to apply strict or intermediate scrutiny. But here, Texas's law does not pass even the less stringent form of review. Under that standard, a law must further a "substantial governmental interest" that is "unrelated to the suppression of free expression." United States v. O'Brien, 391 U.S. 367, 377. Many possible interests relating to social media can meet that test. But Texas's asserted interest relates to the suppression of free expression, and it is not valid, let alone substantial.
Texas has never been shy, and always been consistent, about its interest: The objective is to correct the mix of viewpoints that major platforms present. But a State may not interfere with private actors' speech to advance its own vision of ideological balance. States (and their citizens) are of course right to want an expressive realm in which the public has access to a wide range of views. But the way the First Amendment achieves that goal is by preventing the government from "tilt[ing] public debate in a preferred direction," Sorrell v. IMS Health Inc., 564 U.S. 552, 578-579, not by licensing the government to stop private actors from speaking as they wish and preferring some views over others. A State cannot prohibit speech to rebalance the speech market. That unadorned interest is not "unrelated to the suppression of free expression." And Texas may not pursue it consistent with the First Amendment. Pp. 26-29.

No. 22-277, 34 F. 4th 1196; No. 22-555, 49 F. 4th 439; vacated and remanded.
KAGAN, J., delivered the opinion of the Court, in which ROBERTS, C. J., and SOTOMAYOR, KAVANAUGH, and BARRETT, JJ., joined in full, and in which JACKSON, J., joined as to Parts I, II, and III-A. BARRETT, J., filed a concurring opinion. JACKSON, J., filed an opinion concurring in part and concurring in the judgment. THOMAS, J., filed an opinion concurring in the judgment. ALITO, J., filed an opinion concurring in the judgment, in which THOMAS and GORSUCH, JJ., joined.
OPINION
KAGAN, JUSTICE
Not even thirty years ago, this Court felt the need to explain to the opinion-reading public that the "Internet is an international network of interconnected computers." Reno v. American Civil Liberties Union, 521 U.S. 844, 849 (1997). Things have changed since then. At the time, only 40 million people used the internet. See id., at 850. Today, Facebook and YouTube alone have over two billion users each. See App. in No. 22-555, p. 67a. And the public likely no longer needs this Court to define the internet.
These years have brought a dizzying transformation in how people communicate, and with it a raft of public policy issues. Social-media platforms, as well as other websites, have gone from unheard-of to inescapable. They structure how we relate to family and friends, as well as to businesses, civic organizations, and governments. The novel services they offer make our lives better, and make them worse: they create unparalleled opportunities and unprecedented dangers. The questions of whether, when, and how to regulate online entities, and in particular the social-media giants, are understandably on the front burner of many legislatures and agencies. And those government actors will generally be better positioned than courts to respond to the emerging challenges social-media entities pose.
But courts still have a necessary role in protecting those entities' rights of speech, as courts have historically protected traditional media's rights. To the extent that social-media platforms create expressive products, they receive the First Amendment's protection. And although these cases are here in a preliminary posture, the current record suggests that some platforms, in at least some functions, are indeed engaged in expression. In constructing certain feeds, those platforms make choices about what third-party speech to display and how to display it. They include and exclude, organize and prioritize, and in making millions of those decisions each day, produce their own distinctive compilations of expression. And while much about social media is new, the essence of that project is something this Court has seen before. Traditional publishers and editors also select and shape other parties' expression into their own curated speech products. And we have repeatedly held that laws curtailing their editorial choices must meet the First Amendment's requirements. The principle does not change because the curated compilation has gone from the physical to the virtual world. In the latter, as in the former, government efforts to alter an edited compilation of third-party expression are subject to judicial review for compliance with the First Amendment.
Today, we consider whether two state laws regulating social-media platforms and other websites facially violate the First Amendment. The laws, from Florida and Texas, restrict the ability of social-media platforms to control whether and how third-party posts are presented to other users. Or otherwise put, the laws limit the platforms' capacity to engage in content moderation: to filter, prioritize, and label the varied messages, videos, and other content their users wish to post. In addition, though far less addressed in this Court, the laws require a platform to provide an individualized explanation to a user if it removes or alters her posts. NetChoice, an internet trade association, challenged both laws on their face, as a whole rather than as to particular applications. The cases come to us at an early stage, on review of preliminary injunctions. The Court of Appeals for the Eleventh Circuit upheld such an injunction, finding that the Florida law was not likely to survive First Amendment review. The Court of Appeals for the Fifth Circuit reversed a similar injunction, primarily reasoning that the Texas law does not regulate any speech and so does not implicate the First Amendment.
Today, we vacate both decisions for reasons separate from the First Amendment merits, because neither Court of Appeals properly considered the facial nature of NetChoice's challenge. The courts mainly addressed what the parties had focused on. And the parties mainly argued these cases as if the laws applied only to the curated feeds offered by the largest and most paradigmatic social-media platforms, as if, say, each case presented an as-applied challenge brought by Facebook protesting its loss of control over the content of its News Feed. But argument in this Court revealed that the laws might apply to, and differently affect, other kinds of websites and apps. In a facial challenge, that could well matter, even when the challenge is brought under the First Amendment. As explained below, the question in such a case is whether a law's unconstitutional applications are substantial compared to its constitutional ones. To make that judgment, a court must determine a law's full set of applications, evaluate which are constitutional and which are not, and compare the one to the other. Neither court performed that necessary inquiry.
To do that right, of course, a court must understand what kind of government actions the First Amendment prohibits. We therefore set out the relevant constitutional principles, and explain how one of the Courts of Appeals failed to follow them. Contrary to what the Fifth Circuit thought, the current record indicates that the Texas law does regulate speech when applied in the way the parties focused on below: when applied, that is, to prevent Facebook (or YouTube) from using its content-moderation standards to remove, alter, organize, prioritize, or disclaim posts in its News Feed (or homepage). The law then prevents exactly the kind of editorial judgments this Court has previously held to receive First Amendment protection. It prevents a platform from compiling the third-party speech it wants in the way it wants, and thus from offering the expressive product that most reflects its own views and priorities. Still more, the law (again, in that specific application) is unlikely to withstand First Amendment scrutiny. Texas has thus far justified the law as necessary to balance the mix of speech on Facebook's News Feed and similar platforms; and the record reflects that Texas officials passed it because they thought those feeds skewed against politically conservative voices. But this Court has many times held, in many contexts, that it is no job for government to decide what counts as the right balance of private expression: to "un-bias" what it thinks biased, rather than to leave such judgments to speakers and their audiences. That principle works for social-media platforms as it does for others.
In sum, there is much work to do below on both these cases, given the facial nature of NetChoice's challenges. But that work must be done consistent with the First Amendment, which does not go on leave when social media are involved.
I
As commonly understood, the term "social media platforms" typically refers to websites and mobile apps that allow users to upload content (messages, pictures, videos, and so on) to share with others. Those viewing the content can then react to it, comment on it, or share it themselves. The biggest social-media companies, entities like Facebook and YouTube, host a staggering amount of content. Facebook users, for example, share more than 100 billion messages every day. See App. in No. 22-555, at 67a. And YouTube sees more than 500 hours of video uploaded every minute. See ibid.
In the face of that deluge, the major platforms cull and organize uploaded posts in a variety of ways. A user does not see everything (even everything from the people she follows) in reverse-chronological order. The platforms will have removed some content entirely; ranked or otherwise prioritized what remains; and sometimes added warnings or labels. Of particular relevance here, Facebook and YouTube make some of those decisions in conformity with content-moderation policies they call Community Standards and Community Guidelines. Those rules list the subjects or messages the platform prohibits or discourages: say, pornography, hate speech, or misinformation on select topics. The rules thus lead Facebook and YouTube to remove, disfavor, or label various posts based on their content.
In 2021, Florida and Texas enacted statutes regulating internet platforms, including the large social-media companies just mentioned. The States' laws differ in the entities they cover and the activities they limit. But both contain content-moderation provisions, restricting covered platforms' choices about whether and how to display user-generated content to the public. And both include individualized-explanation provisions, requiring platforms to give reasons for particular content-moderation choices.
Florida's law regulates "social media platforms," as defined expansively, that have annual gross revenue of over $100 million or more than 100 million monthly active users. Fla. Stat. §501.2041(1)(g) (2023). The statute restricts varied ways of "censor[ing]" or otherwise disfavoring posts (including deleting, altering, labeling, or deprioritizing them) based on their content or source. §501.2041(1)(b). For example, the law prohibits a platform from taking those actions against "a journalistic enterprise based on the content of its publication or broadcast." §501.2041(2)(j). Similarly, the law prevents deprioritizing posts by or about political candidates. See §501.2041(2)(h). And the law requires platforms to apply their content-moderation practices to users "in a consistent manner." §501.2041(2)(b).
The definition of "social-media platforms" covers "any information service, system, Internet search engine, or access software provider" that "[p]rovides or enables computer access by multiple users to a computer server, including an Internet platform or a social media site." Fla. Stat. §501.2041(1)(g)(1).
In addition, the Florida law mandates that a platform provide an explanation to a user any time it removes or alters any of her posts. See §501.2041(2)(d)(1). The requisite notice must be delivered within seven days, and contain both a "thorough rationale" for the action and an account of how the platform became aware of the targeted material. §501.2041(3).
The Texas law regulates any social-media platform, having over 50 million monthly active users, that allows its users "to communicate with other users for the primary purpose of posting information, comments, messages, or images." Tex. Bus. & Com. Code Ann. §§120.001(1), 120.002(b) (West Cum. Supp. 2023). With several exceptions, the statute prevents platforms from "censor[ing]" a user or a user's expression based on viewpoint. Tex. Civ. Prac. & Rem. Code Ann. §§143A.002(a), 143A.006 (West Cum. Supp. 2023). That ban on "censor[ing]" covers any action to "block, ban, remove, deplatform, demonetize, deboost, restrict, deny equal access or visibility to, or otherwise discriminate against expression." §143A.001(1). The statute also requires that "concurrently with the removal" of user content, the platform shall "notify the user" and "explain the reason the content was removed." §120.103(a)(1). The user gets a right of appeal, and the platform must address an appeal within 14 days. See §§120.103(a)(2), 120.104.
The statute further clarifies that it does not cover internet service providers, email providers, and any online service, website, or app consisting "primarily of news, sports, entertainment, or other information or content that is not user generated but is preselected by the provider." §120.001(1).
Soon after Florida and Texas enacted those statutes, NetChoice LLC and the Computer & Communications Industry Association (collectively, NetChoice), trade associations whose members include Facebook and YouTube, brought facial First Amendment challenges against the two laws. District courts in both States entered preliminary injunctions, halting the laws' enforcement. See 546 F.Supp.3d 1082, 1096 (ND Fla. 2021); 573 F.Supp.3d 1092, 1117 (WD Tex. 2021). Each court held that the suit before it is likely to succeed because the statute infringes on the constitutionally protected "editorial judgment" of NetChoice's members about what material they will display. See 546 F.Supp.3d, at 1090; 573 F.Supp.3d, at 1107.
The Eleventh Circuit upheld the injunction of Florida's law, as to all provisions relevant here. The court held that the State's restrictions on content moderation trigger First Amendment scrutiny under this Court's cases protecting "editorial discretion." 34 F. 4th 1196, 1209, 1216 (2022). When a social-media platform "removes or deprioritizes a user or post," the court explained, it makes a "judgment rooted in the platform's own views about the sorts of content and viewpoints that are valuable and appropriate for dissemination." Id., at 1210. The court concluded that the content-moderation provisions are unlikely to survive "intermediate-let alone strict-scrutiny," because a State has no legitimate interest in counteracting "private 'censorship'" by "tilt[ing] public debate in a preferred direction." Id., at 1227-1228. Similarly, the Eleventh Circuit thought the statute's individualized-explanation requirements likely to fall. Applying the standard from Zauderer v. Office of Disciplinary Counsel of Supreme Court of Ohio, 471 U.S. 626 (1985), the court held that the obligation to explain "millions of [decisions] per day" is "unduly burdensome and likely to chill platforms' protected speech." 34 F. 4th, at 1230.
The Fifth Circuit disagreed across the board, and so reversed the preliminary injunction before it. In that court's view, the platforms' content-moderation activities are "not speech" at all, and so do not implicate the First Amendment. 49 F. 4th 439, 466, 494 (2022). But even if those activities were expressive, the court continued, the State could regulate them to advance its interest in "protecting a diversity of ideas." Id., at 482 (emphasis deleted). The court further held that the statute's individualized-explanation provisions would likely survive, again even assuming that the platforms were engaged in speech. Those requirements, the court maintained, are not unduly burdensome under Zauderer because the platforms needed only to "scale up" a "complaint-and-appeal process" they already used. 49 F. 4th, at 487.
We granted certiorari to resolve the split between the Fifth and Eleventh Circuits. 600 U.S. ___ (2023).
II
NetChoice chose to litigate these cases as facial challenges, and that decision comes at a cost. For a host of good reasons, courts usually handle constitutional claims case by case, not en masse. See Washington State Grange v. Washington State Republican Party, 552 U.S. 442, 450-451 (2008). "Claims of facial invalidity often rest on speculation" about the law's coverage and its future enforcement. Id., at 450. And "facial challenges threaten to short circuit the democratic process" by preventing duly enacted laws from being implemented in constitutional ways. Id., at 451. This Court has therefore made facial challenges hard to win.
That is true even when a facial suit is based on the First Amendment, although then a different standard applies. In other cases, a plaintiff cannot succeed on a facial challenge unless he "establish[es] that no set of circumstances exists under which the [law] would be valid," or he shows that the law lacks a "plainly legitimate sweep." United States v. Salerno, 481 U.S. 739, 745 (1987); Washington State Grange, 552 U.S., at 449. In First Amendment cases, however, this Court has lowered that very high bar. To "provide[] breathing room for free expression," we have substituted a less demanding though still rigorous standard. United States v. Hansen, 599 U.S. 762, 769 (2023). The question is whether "a substantial number of [the law's] applications are unconstitutional, judged in relation to the statute's plainly legitimate sweep." Americans for Prosperity Foundation v. Bonta, 594 U.S. 595, 615 (2021); see Hansen, 599 U.S., at 770 (likewise asking whether the law "prohibits a substantial amount of protected speech relative to its plainly legitimate sweep"). So in this singular context, even a law with "a plainly legitimate sweep" may be struck down in its entirety. But that is so only if the law's unconstitutional applications substantially outweigh its constitutional ones.
So far in these cases, no one has paid much attention to that issue. In the lower courts, NetChoice and the States alike treated the laws as having certain heartland applications, and mostly confined their battle to that terrain. More specifically, the focus was on how the laws applied to the content-moderation practices that giant social-media platforms use on their best-known services to filter, alter, or label their users' posts. Or more specifically still, the focus was on how the laws applied to Facebook's News Feed and YouTube's homepage. Reflecting the parties' arguments, the Eleventh and Fifth Circuits also mostly confined their analysis in that way. See 34 F. 4th, at 1210, 1213 (considering "platforms like Facebook, Twitter, YouTube, and TikTok" and content moderation in "viewers' feeds"); 49 F. 4th, at 445, 460, 478, 492 (considering platforms "such as Facebook, Twitter, and YouTube" and referencing users' feeds); see also id., at 501 (Southwick, J., concurring in part and dissenting in part) (analyzing a curated feed). On their way to opposing conclusions, they concentrated on the same issue: whether a state law can regulate the content-moderation practices used in Facebook's News Feed (or near equivalents). They did not address the full range of activities the laws cover, and measure the constitutional against the unconstitutional applications. In short, they treated these cases more like as-applied claims than like facial ones.
The first step in the proper facial analysis is to assess the state laws' scope. What activities, by what actors, do the laws prohibit or otherwise regulate? The laws of course differ one from the other. But both, at least on their face, appear to apply beyond Facebook's News Feed and its ilk. Members of this Court asked some of the relevant questions at oral argument. Starting with Facebook and the other giants: To what extent, if at all, do the laws affect their other services, like direct messaging or events management? See Tr. of Oral Arg. in No. 22-555, pp. 62-63; Tr. of Oral Arg. in No. 22-277, pp. 24-25; App. in No. 22-277, pp. 129, 159. And beyond those social-media entities, what do the laws have to say, if anything, about how an email provider like Gmail filters incoming messages, how an online marketplace like Etsy displays customer reviews, how a payment service like Venmo manages friends' financial exchanges, or how a ride-sharing service like Uber runs? See Tr. of Oral Arg. in No. 22-277, at 74-79, 95-98; see also id., at 153 (Solicitor General) ("I have some sympathy [for the Court] here. In preparation for this argument, I've been working with my team to say, does this even cover direct messaging? Does this even cover Gmail?"). Those are examples only. The online world is variegated and complex, encompassing an ever-growing number of apps, services, functionalities, and methods for communication and connection. Each might (or might not) have to change because of the provisions, as to either content moderation or individualized explanation, in Florida's or Texas's law. Before a court can do anything else with these facial challenges, it must address that set of issues-in short, must "determine what [the law] covers." Hansen, 599 U.S., at 770.
The next order of business is to decide which of the laws' applications violate the First Amendment, and to measure them against the rest. For the content-moderation provisions, that means asking, as to every covered platform or function, whether there is an intrusion on protected editorial discretion. See infra, at 13-19. And for the individualized-explanation provisions, it means asking, again as to each thing covered, whether the required disclosures unduly burden expression. See Zauderer, 471 U.S., at 651. Even on a preliminary record, it is not hard to see how the answers might differ as between regulation of Facebook's News Feed (considered in the courts below) and, say, its direct messaging service (not so considered). Curating a feed and transmitting direct messages, one might think, involve different levels of editorial choice, so that the one creates an expressive product and the other does not.
If so, regulation of those diverse activities could well fall on different sides of the constitutional line. To decide the facial challenges here, the courts below must explore the laws' full range of applications (the constitutionally impermissible and permissible both) and compare the two sets. Maybe the parties treated the content-moderation choices reflected in Facebook's News Feed and YouTube's homepage as the laws' heartland applications because they are the principal things regulated, and should have just that weight in the facial analysis. Or maybe not: Maybe the parties' focus had all to do with litigation strategy, and there is a sphere of other applications, and constitutional ones, that would prevent the laws' facial invalidation.
The problem for this Court is that it cannot undertake the needed inquiries. "[W]e are a court of review, not of first view." Cutter v. Wilkinson, 544 U.S. 709, 718, n. 7 (2005). Neither the Eleventh Circuit nor the Fifth Circuit performed the facial analysis in the way just described. And even were we to ignore the value of other courts going first, we could not proceed very far. The parties have not briefed the critical issues here, and the record is underdeveloped. So we vacate the decisions below and remand these cases. That will enable the lower courts to consider the scope of the laws' applications, and weigh the unconstitutional as against the constitutional ones.
III
But it is necessary to say more about how the First Amendment relates to the laws' content-moderation provisions, to ensure that the facial analysis proceeds on the right path in the courts below. That need is especially stark for the Fifth Circuit. Recall that it held that the content choices the major platforms make for their main feeds are "not speech" at all, so States may regulate them free of the First Amendment's restraints. 49 F. 4th, at 494; see supra, at 8. And even if those activities were expressive, the court held, Texas's interest in better balancing the marketplace of ideas would satisfy First Amendment scrutiny. See 49 F. 4th, at 482. If we said nothing about those views, the court presumably would repeat them when it next considers NetChoice's challenge. It would thus find that significant applications of the Texas law-and so significant inputs into the appropriate facial analysis-raise no First Amendment difficulties. But that conclusion would rest on a serious misunderstanding of First Amendment precedent and principle. The Fifth Circuit was wrong in concluding that Texas's restrictions on the platforms' selection, ordering, and labeling of third-party posts do not interfere with expression. And the court was wrong to treat as valid Texas's interest in changing the content of the platforms' feeds. Explaining why that is so will prevent the Fifth Circuit from repeating its errors as to Facebook's and YouTube's main feeds. (And our analysis of Texas's law may also aid the Eleventh Circuit, which saw the First Amendment issues much as we do, when next considering NetChoice's facial challenge.) But a caveat: Nothing said here addresses any of the laws' other applications, which may or may not share the First Amendment problems described below.
Although the discussion below focuses on Texas's content-moderation provisions, it also bears on how the lower courts should address the individualized-explanation provisions in the upcoming facial inquiry. As noted, requirements of that kind violate the First Amendment if they unduly burden expressive activity. See Zauderer v. Office of Disciplinary Counsel of Supreme Court of Ohio, 471 U.S. 626, 651 (1985); supra, at 11. So our explanation of why Facebook and YouTube are engaged in expression when they make content-moderation choices in their main feeds should inform the courts' further consideration of that issue.
A
Despite the relative novelty of the technology before us, the main problem in this case, and the inquiry it calls for, is not new. At bottom, Texas's law requires the platforms to carry and promote user speech that they would rather discard or downplay. The platforms object that the law thus forces them to alter the content of their expression: a particular edited compilation of third-party speech. See Brief for NetChoice in No. 22-555, pp. 18-34. That controversy sounds a familiar note. We have repeatedly faced the question whether ordering a party to provide a forum for someone else's views implicates the First Amendment. And we have repeatedly held that it does so if, though only if, the regulated party is engaged in its own expressive activity, which the mandated access would alter or disrupt. So too we have held, when applying that principle, that expressive activity includes presenting a curated compilation of speech originally created by others. A review of the relevant precedents will help resolve the question here.
The seminal case is Miami Herald Publishing Co. v. Tornillo, 418 U.S. 241 (1974). There, a Florida law required a newspaper to give a political candidate a right to reply when it published "criticism and attacks on his record." Id., at 243. The Court held the law to violate the First Amendment because it interfered with the newspaper's "exercise of editorial control and judgment." Id., at 258. Forcing the paper to print what "it would not otherwise print," the Court explained, "intru[ded] into the function of editors." Id., at 256, 258. For that function was, first and foremost, to make decisions about the "content of the paper" and "[t]he choice of material to go into" it. Id., at 258. In protecting that right of editorial control, the Court recognized a possible downside. It noted the access advocates' view (similar to the States' view here) that "modern media empires" had gained ever greater capacity to "shape" and even "manipulate popular opinion." Id., at 249-250. And the Court expressed some sympathy with that diagnosis. See id., at 254. But the cure proposed, it concluded, collided with the First Amendment's antipathy to state manipulation of the speech market. Florida, the Court explained, could not substitute "governmental regulation" for the "crucial process" of editorial choice. Id., at 258.
Next up was Pacific Gas & Elec. Co. v. Public Util. Comm'n of Cal., 475 U.S. 1 (1986) (PG&E), which the Court thought to follow naturally from Tornillo. See 475 U.S., at 9-12 (plurality opinion); id., at 21 (Burger, C. J., concurring). A private utility in California regularly put a newsletter in its billing envelopes expressing its views of energy policy. The State directed it to include as well material from a consumer-advocacy group giving a different perspective. The utility objected, and the Court held again that the interest in "offer[ing] the public a greater variety of views" could not justify the regulation. Id., at 12. California was compelling the utility (as Florida had compelled a newspaper) "to carry speech with which it disagreed" and thus to "alter its own message." Id., at 11, n. 7, 16.
In Turner Broadcasting System, Inc. v. FCC, 512 U.S. 622 (1994) (Turner I), the Court further underscored the constitutional protection given to editorial choice. At issue were federal "must-carry" rules, requiring cable operators to allocate some of their channels to local broadcast stations. The Court had no doubt that the First Amendment was implicated, because the operators were engaging in expressive activity. They were, the Court explained, "exercising editorial discretion over which stations or programs to include in [their] repertoire." Id., at 636. And the rules "interfere[d]" with that discretion by forcing the operators to carry stations they would not otherwise have chosen. Id., at 643-644. In a later decision, the Court ruled that the regulation survived First Amendment review because it was necessary to prevent the demise of local broadcasting. See Turner Broadcasting System, Inc. v. FCC, 520 U.S. 180, 185, 189-190 (1997) (Turner II); see infra, at 28, n. 10. But for purposes of today's cases, the takeaway of Turner is this holding: A private party's collection of third-party content into a single speech product (the operators' "repertoire" of programming) is itself expressive, and intrusion into that activity must be specially justified under the First Amendment.
The capstone of those precedents came in Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston, Inc., 515 U.S. 557 (1995), when the Court considered (of all things) a parade. The question was whether Massachusetts could require the organizers of a St. Patrick's Day parade to admit as a participant a gay and lesbian group seeking to convey a message of "pride." Id., at 561. The Court held unanimously that the First Amendment precluded that compulsion. The "selection of contingents to make a parade," it explained, is entitled to First Amendment protection, no less than a newspaper's "presentation of an edited compilation of [other persons'] speech." Id., at 570 (citing Tornillo, 418 U.S., at 258). And that meant the State could not tell the parade organizers whom to include. Because "every participating unit affects the message," said the Court, ordering the group's admittance would "alter the expressive content of the[] parade." Hurley, 515 U.S., at 572-573. The parade's organizers had "decided to exclude a message [they] did not like from the communication [they] chose to make," and that was their decision alone. Id., at 574.
On two other occasions, the Court distinguished Tornillo and its progeny for the flip-side reason: because in those cases the compelled access did not affect the complaining party's own expression. First, in PruneYard Shopping Center v. Robins, 447 U.S. 74 (1980), the Court rejected a shopping mall's First Amendment challenge to a California law requiring it to allow members of the public to distribute handbills on its property. The mall owner did not claim that he (or the mall) was engaged in any expressive activity. Indeed, as the PG&E Court later noted, he "did not even allege that he objected to the content of the pamphlets" passed out at the mall. 475 U.S., at 12. Similarly, in Rumsfeld v. Forum for Academic and Institutional Rights, Inc., 547 U.S. 47 (2006) (FAIR), the Court reiterated that a First Amendment claim will not succeed when the entity objecting to hosting third-party speech is not itself engaged in expression. The statute at issue required law schools to allow the military to participate in on-campus recruiting. The Court held that the schools had no First Amendment right to exclude the military based on its hiring policies, because the schools "are not speaking when they host interviews." Id., at 64. Or stated again, with reference to the just-described precedents: Because a "law school's recruiting services lack the expressive quality of a parade, a newsletter, or the editorial page of a newspaper," the required "accommodation of a military recruiter[]" did not "interfere with any message of the school." Ibid.
That is a slew of individual cases, so consider three general points to wrap up. Not coincidentally, they will figure in the upcoming discussion of the First Amendment problems the statutes at issue here likely present as to Facebook's News Feed and similar products.
First, the First Amendment offers protection when an entity engaging in expressive activity, including compiling and curating others' speech, is directed to accommodate messages it would prefer to exclude. "[T]he editorial function itself is an aspect of speech." Denver Area Ed. Telecommunications Consortium, Inc. v. FCC, 518 U.S. 727, 737 (1996) (plurality opinion). Or said just a bit differently: An entity "exercis[ing] editorial discretion in the selection and presentation" of content is "engage[d] in speech activity." Arkansas Ed. Television Comm'n v. Forbes, 523 U.S. 666, 674 (1998). And that is as true when the content comes from third parties as when it does not. (Again, think of a newspaper opinion page or, if you prefer, a parade.) Deciding on the third-party speech that will be included in or excluded from a compilation, and then organizing and presenting the included items, is expressive activity of its own. And that activity results in a distinctive expressive product. When the government interferes with such editorial choices (say, by ordering the excluded to be included), it alters the content of the compilation. (It creates a different opinion page or parade, bearing a different message.) And in so doing, in overriding a private party's expressive choices, the government confronts the First Amendment.
Of course, an entity engaged in expressive activity when performing one function may not be when carrying out another. That is one lesson of FAIR. The Court ruled as it did because the law schools' recruiting services were not engaged in expression. See 547 U.S., at 64. The case could not have been resolved on that ground if the regulation had affected what happened in law school classes instead.
Second, none of that changes just because a compiler includes most items and excludes just a few. That was the situation in Hurley. The St. Patrick's Day parade at issue there was "eclectic": It included a "wide variety of patriotic, commercial, political, moral, artistic, religious, athletic, public service, trade union, and eleemosynary themes, as well as conflicting messages." 515 U.S., at 562. Or otherwise said, the organizers were "rather lenient in admitting participants." Id., at 569. No matter. A "narrow, succinctly articulable message is not a condition of constitutional protection." Ibid. It "is enough" for a compiler to exclude the handful of messages it most "disfavor[s]." Id., at 574. Suppose, for example, that the newspaper in Tornillo had granted a right of reply to all but one candidate. It would have made no difference; the Florida statute still could not have altered the paper's policy. Indeed, that kind of focused editorial choice packs a peculiarly powerful expressive punch.
Third, the government cannot get its way just by asserting an interest in improving, or better balancing, the marketplace of ideas. Of course, it is critically important to have a well-functioning sphere of expression, in which citizens have access to information from many sources. That is the whole project of the First Amendment. And the government can take varied measures, like enforcing competition laws, to protect that access. Cf., e.g., Turner I, 512 U.S., at 647 (protecting local broadcasting); Hurley, 515 U.S., at 577 (discussing Turner I). But in case after case, the Court has barred the government from forcing a private speaker to present views it wished to spurn in order to rejigger the expressive realm. The regulations in Tornillo, PG&E, and Hurley all were thought to promote greater diversity of expression. See supra, at 14-16. They also were thought to counteract advantages some private parties possessed in controlling "enviable vehicle[s]" for speech. Hurley, 515 U.S., at 577. Indeed, the Tornillo Court devoted six pages of its opinion to recounting a critique of the then-current media environment (in particular, the disproportionate "influen[ce]" of a few speakers) similar to one heard today (except about different entities). 418 U.S., at 249; see id., at 248-254; supra, at 14-15. It made no difference. However imperfect the private marketplace of ideas, here was a worse proposal: the government itself deciding when speech was imbalanced, and then coercing speakers to provide more of some views or less of others.
B
"[W]hatever the challenges of applying the Constitution to ever-advancing technology, the basic principles" of the First Amendment "do not vary." Brown v. Entertainment Merchants Assn., 564 U.S. 786, 790 (2011). New communications media differ from old ones in a host of ways: No one thinks Facebook's News Feed much resembles an insert put in a billing envelope. And similarly, today's social media pose dangers not seen earlier: No one ever feared the effects of newspaper opinion pages on adolescents' mental health. But analogies to old media, even if imperfect, can be useful. And better still as guides to decision are settled principles about freedom of expression, including the ones just described. Those principles have served the Nation well over many years, even as one communications method has given way to another. And they have much to say about the laws at issue here. These cases, to be sure, are at an early stage; the record is incomplete even as to the major social-media platforms' main feeds, much less the other applications that must now be considered. See supra, at 12. But in reviewing the District Court's preliminary injunction, the Fifth Circuit got its likelihood-of-success finding wrong. Texas is not likely to succeed in enforcing its law against the platforms' application of their content-moderation policies to the feeds that were the focus of the proceedings below. And that is because of the core teaching elaborated in the above-summarized decisions: The government may not, in supposed pursuit of better expressive balance, alter a private speaker's own editorial choices about the mix of speech it wants to convey.
Most readers are likely familiar with Facebook's News Feed or YouTube's homepage; assuming so, feel free to skip this paragraph (and maybe a couple more). For the uninitiated, though, each of those feeds presents a user with a continually updating stream of other users' posts. For Facebook's News Feed, any user may upload a message, whether verbal or visual, with content running the gamut from "vacation pictures from friends" to "articles from local or national news outlets." App. in No. 22-555, at 139a. And whenever a user signs on, Facebook delivers a personalized collection of those stories. Similarly for YouTube. Its users upload all manner of videos. And any person opening the website or mobile app receives an individualized list of video recommendations.
The key to the scheme is prioritization of content, achieved through the use of algorithms. Of the billions of posts or videos (plus advertisements) that could wind up on a user's customized feed or recommendations list, only the tiniest fraction do. The selection and ranking is most often based on a user's expressed interests and past activities. But it may also be based on more general features of the communication or its creator. Facebook's Community Standards and YouTube's Community Guidelines detail the messages and videos that the platforms disfavor. The platforms write algorithms to implement those standards-for example, to prefer content deemed particularly trustworthy or to suppress content viewed as deceptive (like videos promoting "conspiracy theor[ies]"). Id., at 113a.
Beyond rankings lie labels. The platforms may attach "warning[s], disclaimers, or general commentary"-for example, informing users that certain content has "not been verified by official sources." Id., at 75a. Likewise, they may use "information panels" to give users "context on content relating to topics and news prone to misinformation, as well as context about who submitted the content." Id., at 114a. So, for example, YouTube identifies content submitted by state-supported media channels, including those funded by the Russian Government. See id., at 76a.
But sometimes, the platforms decide, providing more information is not enough; instead, removing a post is the right course. The platforms' content-moderation policies also say when that is so. Facebook's Standards, for example, proscribe posts (with exceptions for "newsworth[iness]" and other "public interest value") in categories and subcategories including: Violence and Criminal Behavior (e.g., violence and incitement, coordinating harm and publicizing crime, fraud and deception); Safety (e.g., suicide and self-injury, sexual exploitation, bullying and harassment); Objectionable Content (e.g., hate speech, violent and graphic content); Integrity and Authenticity (e.g., false news, manipulated media). Id., at 412a-415a, 441a-442a. YouTube's Guidelines similarly target videos falling within categories like: hate speech, violent or graphic content, child safety, and misinformation (including about elections and vaccines). See id., at 430a-432a. The platforms thus unabashedly control the content that will appear to users, exercising authority to remove, label, or demote messages they disfavor.
We therefore do not deal here with feeds whose algorithms respond solely to how users act online, giving them the content they appear to want without any regard to independent content standards. See post, at 2 (BARRETT, J., concurring). Like them or loathe them, the Community Standards and Community Guidelines make a wealth of user-agnostic judgments about what kinds of speech, including what viewpoints, are not worthy of promotion. And those judgments show up in Facebook's and YouTube's main feeds.
Except that Texas's law limits their power to do so. As noted earlier, the law's central provision prohibits the large social-media platforms (and maybe other entities) from "censor[ing]" a "user's expression" based on its "viewpoint." §143A.002(a)(2); see supra, at 7. The law defines "expression" broadly, thus including pretty much anything that might be posted. See §143A.001(2). And it defines "censor" to mean "block, ban, remove, deplatform, demonetize, deboost, restrict, deny equal access or visibility to, or otherwise discriminate against expression." §143A.001(1). That is a long list of verbs, but it comes down to this: The platforms cannot do any of the things they typically do (on their main feeds) to posts they disapprove (cannot demote, label, or remove them) whenever the action is based on the post's viewpoint. And what does that "based on viewpoint" requirement entail? Doubtless some of the platforms' content-moderation practices are based on characteristics of speech other than viewpoint (e.g., on subject matter).

The scope of the Texas law, a matter crucial to the facial inquiry, is unsettled, as previously discussed. See supra, at 10-11. The Texas solicitor general at oral argument stated that he understood the law to cover Facebook and YouTube, but "d[id]n't know" whether it also covered other platforms and applications. Tr. of Oral Arg. in No. 22-555, pp. 61-62.

In addition to barring "censor[ship]" of "expression," the law bars "censor[ship]" of people. More specifically, it prohibits taking the designated "censor[ial]" actions against any "user" based on his "viewpoint," regardless of whether that "viewpoint is expressed on a social media platform." §§143A.002(a)(1), (b); see supra, at 7. Because the Fifth Circuit did not focus on that provision, instead confining its analysis to the law's ban on "censor[ing]" a "user's expression" on the platform, we do the same.

The Texas solicitor general explained at oral argument that the Texas law allows the platforms to remove "categories" of speech, so long as they are not based on viewpoint. See Tr. of Oral Arg. in No. 22-555, at 69-70; §120.052 (Acceptable Use Policy). The example he gave was speech about Al-Qaeda. Under the law, a platform could remove all posts about Al-Qaeda, regardless of viewpoint. But it could not stop the "pro-Al-Qaeda" speech alone; it would have to stop the "anti-Al-Qaeda" speech too. Tr. of Oral Arg. in No. 22-555, at 70. So again, the law, as described by the solicitor general, prevents the platforms from disfavoring posts because they express one view of a subject.

But if Texas's law is enforced, the platforms could not, as they in fact do now, disfavor posts because they:
• support Nazi ideology;
• advocate for terrorism;
• espouse racism, Islamophobia, or anti-Semitism;
• glorify rape or other gender-based violence;
• encourage teenage suicide and self-injury;
• discourage the use of vaccines;
• advise phony treatments for diseases;
• advance false claims of election fraud.
The list could continue for a while. The point of it is not that the speech environment created by Texas's law is worse than the ones to which the major platforms aspire on their main feeds. The point is just that Texas's law profoundly alters the platforms' choices about the views they will, and will not, convey.
The scope of the Texas law, a matter crucial to the facial inquiry, is unsettled, as previously discussed. See supra, at 10-11. The Texas solicitor general at oral argument stated that he understood the law to cover Facebook and YouTube, but "d[id]n't know" whether it also covered other platforms and applications. Tr. of Oral Arg. in No. 22-555, pp. 61-62.

In addition to barring "censor[ship]" of "expression," the law bars "censor[ship]" of people. More specifically, it prohibits taking the designated "censor[ial]" actions against any "user" based on his "viewpoint," regardless of whether that "viewpoint is expressed on a social media platform." §§143A.002(a)(1), (b); see supra, at 7. Because the Fifth Circuit did not focus on that provision, instead confining its analysis to the law's ban on "censor[ing]" a "user's expression" on the platform, we do the same.

The Texas solicitor general explained at oral argument that the Texas law allows the platforms to remove "categories" of speech, so long as the removals are not based on viewpoint. See Tr. of Oral Arg. in No. 22-555, at 69-70; §120.052 (Acceptable Use Policy). The example he gave was speech about Al-Qaeda. Under the law, a platform could remove all posts about Al-Qaeda, regardless of viewpoint. But it could not stop the "pro-Al-Qaeda" speech alone; it would have to stop the "anti-Al-Qaeda" speech too. Tr. of Oral Arg. in No. 22-555, at 70. So again, the law, as described by the solicitor general, prevents the platforms from disfavoring posts because they express one view of a subject.

Details on both the enumerated examples and similar ones are found in Facebook's Community Standards and YouTube's Community Guidelines. See https://transparency.meta.com/policies/community-standards; https://support.google.com/youtube/answer/9288567.
And we have time and again held that type of regulation to interfere with protected speech. Like the editors, cable operators, and parade organizers this Court has previously considered, the major social-media platforms are in the business, when curating their feeds, of combining "multifarious voices" to create a distinctive expressive offering. Hurley, 515 U.S., at 569. The individual messages may originate with third parties, but the larger offering is the platform's. It is the product of a wealth of choices about whether, and if so how, to convey posts having a certain content or viewpoint. Those choices rest on a set of beliefs about which messages are appropriate and which are not (or which are more appropriate and which less so). And in the aggregate they give the feed a particular expressive quality. Consider again an opinion page editor, as in Tornillo, who wants to publish a variety of views, but thinks some things off-limits (or, to change the facts, worth only a couple of column inches). "The choice of material," the "decisions made [as to] content," the "treatment of public issues," "whether fair or unfair," all these "constitute the exercise of editorial control and judgment." Tornillo, 418 U.S., at 258. For a paper, and for a platform too. And the Texas law (like Florida's earlier right-of-reply statute) targets those expressive choices, in particular by forcing the major platforms to present and promote content on their feeds that they regard as objectionable.
That those platforms happily convey the lion's share of posts submitted to them makes no significant First Amendment difference. Contra, 49 F. 4th, at 459-461 (arguing otherwise). To begin with, Facebook and YouTube exclude (not to mention, label or demote) lots of content from their News Feed and homepage. The Community Standards and Community Guidelines set out in copious detail the varied kinds of speech the platforms want no truck with. And both platforms appear to put those manuals to work. In a single quarter of 2021, Facebook removed from its News Feed more than 25 million pieces of "hate speech content" and almost 9 million pieces of "bullying and harassment content." App. in No. 22-555, at 80a. Similarly, YouTube deleted in one quarter more than 6 million videos violating its Guidelines. See id., at 116a. And among those are the removals the Texas law targets. What is more, this Court has already rightly declined to focus on the ratio of rejected to accepted content. Recall that in Hurley, the parade organizers welcomed pretty much everyone, excluding only those who expressed a message of gay pride. See supra, at 18. The Court held that the organizers' "lenient" admissions policy, and their resulting failure to express a "particularized message," did "not forfeit" their right to reject the few messages they found harmful or offensive. 515 U.S., at 569, 574. So too here, though the excluded viewpoints differ. That Facebook and YouTube convey a mass of messages does not license Texas to prohibit them from deleting posts with, say, "hate speech" based on "sexual orientation." App. in No. 22-555, at 126a, 155a; see id., at 431a. It is as much an editorial choice to convey all speech except in select categories as to convey only speech within them.
Similarly, the major social-media platforms do not lose their First Amendment protection just because no one will wrongly attribute to them the views in an individual post. Contra, 49 F. 4th, at 462 (arguing otherwise). For starters, users may well attribute to the platforms the messages that the posts convey in toto. Those messages, communicated by the feeds as a whole, derive largely from the platforms' editorial decisions about which posts to remove, label, or demote. And because that is so, the platforms may indeed "own" the overall speech environment. In any event, this Court has never hinged a compiler's First Amendment protection on the risk of misattribution. The Court did not think in Turner, and could not have thought in Tornillo or PG&E, that anyone would view the entity conveying the third-party speech at issue as endorsing its content. See Turner I, 512 U.S., at 655 ("[T]here appears little risk" of such misattribution). Yet all those entities, the Court held, were entitled to First Amendment protection for refusing to carry the speech. See supra, at 14-16. To be sure, the Court noted in PruneYard and FAIR, when denying such protection, that there was little prospect of misattribution. See 447 U.S., at 87; 547 U.S., at 65. But the key fact in those cases, as noted above, was that the host of the third-party speech was not itself engaged in expression. See supra, at 16-17. The current record suggests the opposite as to Facebook's News Feed and YouTube's homepage. When the platforms use their Standards and Guidelines to decide which third-party content those feeds will display, or how the display will be ordered and organized, they are making expressive choices. And because that is true, they receive First Amendment protection.
C
And once that much is decided, the interest Texas relies on cannot sustain its law. In the usual First Amendment case, we must decide whether to apply strict or intermediate scrutiny. But here we need not do so. Even assuming that the less stringent form of First Amendment review applies, Texas's law does not pass. Under that standard, a law must further a "substantial governmental interest" that is "unrelated to the suppression of free expression." United States v. O'Brien, 391 U.S. 367, 377 (1968). Many possible interests relating to social media can meet that test; nothing said here puts regulation of NetChoice's members off-limits as to a whole array of subjects. But the interest Texas has asserted cannot carry the day: It is very much related to the suppression of free expression, and it is not valid, let alone substantial.
Texas has never been shy, and always been consistent, about its interest: The objective is to correct the mix of speech that the major social-media platforms present. In this Court, Texas described its law as "respond[ing]" to the platforms' practice of "favoring certain viewpoints." Brief for Texas 7; see id., at 27 (explaining that the platforms' "discrimination" among messages "led to [the law's] enactment"). The large social-media platforms throw out (or encumber) certain messages; Texas wants them kept in (and free from encumbrances), because it thinks that would create a better speech balance. The current amalgam, the State explained in earlier briefing, was "skewed" to one side. 573 F. Supp. 3d, at 1116. And that assessment mirrored the stated views of those who enacted the law, save that the latter had a bit more color. The law's main sponsor explained that the "West Coast oligarchs" who ran social-media companies were "silenc[ing] conservative viewpoints and ideas." Ibid. The Governor, in signing the legislation, echoed the point: The companies were fomenting a "dangerous movement" to "silence" conservatives. Id., at 1108; see id., at 1099 ("[S]ilencing conservative views is un-American, it's un-Texan and it's about to be illegal in Texas").
But a State may not interfere with private actors' speech to advance its own vision of ideological balance. States (and their citizens) are of course right to want an expressive realm in which the public has access to a wide range of views. That is, indeed, a fundamental aim of the First Amendment. But the way the First Amendment achieves that goal is by preventing the government from "tilt[ing] public debate in a preferred direction." Sorrell v. IMS Health Inc., 564 U.S. 552, 578-579 (2011). It is not by licensing the government to stop private actors from speaking as they wish and preferring some views over others. And that is so even when those actors possess "enviable vehicle[s]" for expression. Hurley, 515 U.S., at 577. In a better world, there would be fewer inequities in speech opportunities; and the government can take many steps to bring that world closer. But it cannot prohibit speech to improve or better balance the speech market. On the spectrum of dangers to free expression, there are few greater than allowing the government to change the speech of private actors in order to achieve its own conception of speech nirvana. That is why we have said in so many contexts that the government may not "restrict the speech of some elements of our society in order to enhance the relative voice of others." Buckley v. Valeo, 424 U.S. 1, 48-49 (1976) (per curiam). That unadorned interest is not "unrelated to the suppression of free expression," and the government may not pursue it consistent with the First Amendment.
The Court's decisions about editorial control, as discussed earlier, make that point repeatedly. See supra, at 18-19. Again, the question those cases had in common was whether the government could force a private speaker, including a compiler and curator of third-party speech, to convey views it disapproved. And in most of those cases, the government defended its regulation as yielding greater balance in the marketplace of ideas. But the Court, in Tornillo, in PG&E, and again in Hurley, held that such an interest could not support the government's effort to alter the speaker's own expression. "Our cases establish," the PG&E Court wrote, "that the State cannot advance some points of view by burdening the expression of others." 475 U.S., at 20. So the newspaper, the public utility, the parade organizer, whether acting "fair[ly] or unfair[ly]," could exclude the unwanted message, free from government interference. Tornillo, 418 U.S., at 258; see United States Telecom Assn. v. FCC, 855 F.3d 381, 432 (CADC 2017) (Kavanaugh, J., dissenting from denial of rehearing en banc) ("[E]xcept in rare circumstances, the First Amendment does not allow the Government to regulate the content choices of private editors just so that the Government may enhance certain voices and alter the content available to the citizenry").
Texas claims Turner as a counterexample, but that decision offers no help to speak of. Turner did indeed hold that the FCC's must-carry provisions, requiring cable operators to give some of their channel space to local broadcast stations, passed First Amendment muster. See supra, at 15. But the interest there advanced was not to balance expressive content; rather, the interest was to save the local-broadcast industry, so that it could continue to serve households without cable. That interest, the Court explained, was "unrelated to the content of expression" disseminated by either cable or broadcast speakers. Turner I, 512 U.S. 622, 647 (1994). And later, the Hurley Court again noted the difference. It understood the Government interest in Turner as one relating to competition policy: The FCC needed to limit the cable operators' "monopolistic," gatekeeping position "in order to allow for the survival of broadcasters." 515 U.S., at 577. Unlike in regulating the parade (or here in regulating Facebook's News Feed or YouTube's homepage), the Government's interest was "not the alteration of speech." Ibid. And when that is so, the prospects of permissible regulation are entirely different.
The cases here are of a piece with Tornillo, PG&E, and Hurley, not with Turner. The interest Texas asserts is in changing the balance of speech on the major platforms' feeds, so that messages now excluded will be included. To describe that interest, the State borrows language from this Court's First Amendment cases, maintaining that it is preventing "viewpoint discrimination." Brief for Texas 19; see supra, at 26-27. But the Court uses that language to say what governments cannot do: They cannot prohibit private actors from expressing certain views. When Texas uses that language, it is to say what private actors cannot do: They cannot decide for themselves what views to convey. The innocent-sounding phrase does not redeem the prohibited goal. The reason Texas is regulating the content-moderation policies that the major platforms use for their feeds is to change the speech that will be displayed there. Texas does not like the way those platforms are selecting and moderating content, and wants them to create a different expressive product, communicating different values and priorities. But under the First Amendment, that is a preference Texas may not impose.
IV
These are facial challenges, and that matters. To succeed on its First Amendment claim, NetChoice must show that the law at issue (whether from Texas or from Florida) "prohibits a substantial amount of protected speech relative to its plainly legitimate sweep." Hansen, 599 U.S., at 770. None of the parties below focused on that issue; nor did the Fifth or Eleventh Circuits. But that choice, unanimous as it has been, cannot now control. Even in the First Amendment context, facial challenges are disfavored, and neither parties nor courts can disregard the requisite inquiry into how a law works in all of its applications. So on remand, each court must evaluate the full scope of the law's coverage. It must then decide which of the law's applications are constitutionally permissible and which are not, and finally weigh the one against the other. The need for NetChoice to carry its burden on those issues is the price of its decision to challenge the laws as a whole.
But there has been enough litigation already to know that the Fifth Circuit, if it stayed the course, would get wrong at least one significant input into the facial analysis. The parties treated Facebook's News Feed and YouTube's homepage as the heartland applications of the Texas law. At least on the current record, the editorial judgments influencing the content of those feeds are, contrary to the Fifth Circuit's view, protected expressive activity. And Texas may not interfere with those judgments simply because it would prefer a different mix of messages. How that matters for the requisite facial analysis is for the Fifth Circuit to decide. But it should conduct that analysis in keeping with two First Amendment precepts. First, presenting a curated and "edited compilation of [third party] speech" is itself protected speech. Hurley, 515 U.S., at 570. And second, a State "cannot advance some points of view by burdening the expression of others." PG&E, 475 U.S., at 20. To give government that power is to enable it to control the expression of ideas, promoting those it favors and suppressing those it does not. And that is what the First Amendment protects all of us from.
We accordingly vacate the judgments of the Courts of Appeals for the Fifth and Eleventh Circuits and remand the cases for further proceedings consistent with this opinion.
It is so ordered.
JUSTICE BARRETT, concurring.
I join the Court's opinion, which correctly articulates and applies our First Amendment precedent. In this respect, the Eleventh Circuit's understanding of the First Amendment's protection of editorial discretion was generally correct; the Fifth Circuit's was not.
But for the reasons the Court gives, these cases illustrate the dangers of bringing a facial challenge. If NetChoice's members are concerned about preserving their editorial discretion with respect to the services on which they have focused throughout this litigation (e.g., Facebook's News Feed and YouTube's homepage), they would be better served by bringing a First Amendment challenge as applied to those functions. Analyzing how the First Amendment bears on those functions is complicated enough without simultaneously analyzing how it bears on a platform's other functions (e.g., Facebook Messenger and Google Search), much less on distinct platforms like Uber and Etsy. In fact, dealing with a broad swath of varied platforms and functions in a facial challenge strikes me as a daunting, if not impossible, task. A function qualifies for First Amendment protection only if it is inherently expressive. Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston, Inc., 515 U.S. 557, 568 (1995). Even for a prototypical social-media feed, making this determination involves more than meets the eye.
Consider, for instance, how platforms use algorithms to prioritize and remove content on their feeds. Assume that human beings decide to remove posts promoting a particular political candidate or advocating some position on a public-health issue. If they create an algorithm to help them identify and delete that content, the First Amendment protects their exercise of editorial judgment, even if the algorithm does most of the deleting without a person in the loop. In that event, the algorithm would simply implement human beings' inherently expressive choice "to exclude a message [they] did not like from" their speech compilation. Id., at 574.
But what if a platform's algorithm just presents automatically to each user whatever the algorithm thinks the user will like (e.g., content similar to posts with which the user previously engaged)? See ante, at 22, n. 5. The First Amendment implications of the Florida and Texas laws might be different for that kind of algorithm. And what about AI, which is rapidly evolving? What if a platform's owners hand the reins to an AI tool and ask it simply to remove "hateful" content? If the AI relies on large language models to determine what is "hateful" and should be removed, has a human being with First Amendment rights made an inherently expressive "choice . . . not to propound a particular point of view"? Hurley, 515 U.S., at 575. In other words, technology may attenuate the connection between content-moderation actions (e.g., removing posts) and human beings' constitutionally protected right to "decide for [themselves] the ideas and beliefs deserving of expression, consideration, and adherence." Turner Broadcasting System, Inc. v. FCC, 512 U.S. 622, 641 (1994) (emphasis added). So the way platforms use this sort of technology might have constitutional significance.
There can be other complexities too. For example, the corporate structure and ownership of some platforms may be relevant to the constitutional analysis. A speaker's right to "decide 'what not to say'" is "enjoyed by business corporations generally." Hurley, 515 U.S., at 573-574 (quoting Pacific Gas & Elec. Co. v. Public Util. Comm'n of Cal., 475 U.S. 1, 16 (1986)). Corporations, which are composed of human beings with First Amendment rights, possess First Amendment rights themselves. See Citizens United v. Federal Election Comm'n, 558 U.S. 310, 365 (2010); cf. Burwell v. Hobby Lobby Stores, Inc., 573 U.S. 682, 706-707 (2014). But foreign persons and corporations located abroad do not. Agency for Int'l Development v. Alliance for Open Society Int'l, Inc., 591 U.S. 430, 433-436 (2020). So a social-media platform's foreign ownership and control over its content-moderation decisions might affect whether laws overriding those decisions trigger First Amendment scrutiny. What if the platform's corporate leadership abroad makes the policy decisions about the viewpoints and content the platform will disseminate? Would it matter that the corporation employs Americans to develop and implement content-moderation algorithms if they do so at the direction of foreign executives? Courts may need to confront such questions when applying the First Amendment to certain platforms.
These are just a few examples of questions that might arise in litigation that more thoroughly exposes the relevant facts about particular social-media platforms and functions. The answers in any given case might cast doubt on, or might vindicate, a social-media company's invocation of its First Amendment rights. Regardless, the analysis is bound to be fact-intensive, and it will surely vary from function to function and platform to platform. And in a facial challenge, answering all of those questions isn't even the end of the story: The court must then find a way to measure the unconstitutional relative to the constitutional applications to determine whether the law "prohibits a substantial amount of protected speech relative to its plainly legitimate sweep." United States v. Hansen, 599 U.S. 762, 770 (2023) (internal quotation marks omitted).
A facial challenge to either of these laws likely forces a court to bite off more than it can chew. An as-applied challenge, by contrast, would enable courts to home in on whether and how specific functions (like feeds versus direct messaging) are inherently expressive, and to answer platform- and function-specific questions that might bear on the First Amendment analysis. While the governing constitutional principles are straightforward, applying them in one fell swoop to the entire social-media universe is not.
JUSTICE JACKSON, concurring in part and concurring in the judgment.
These cases present a complex clash between two novel state laws and the alleged First Amendment rights of several of the largest social media platforms. Some things are already clear. Not every potential action taken by a social media company will qualify as expression protected under the First Amendment. But not every hypothesized regulation of such a company's operations will necessarily be able to withstand the force of the First Amendment's protections either. Beyond those broadest of statements, it is difficult to say much more at this time. With these records and lower court decisions, we are not able to adequately evaluate whether the challenged state laws are facially valid.
That is in no small part because, as all Members of the Court acknowledge, plaintiffs bringing a facial challenge must clear a high bar. See ante, at 9-10 (majority opinion); post, at 13-14 (ALITO, J., concurring in judgment). The Eleventh Circuit failed to appreciate the nature of this challenge, and the Fifth Circuit did not adequately evaluate it. That said, I agree with JUSTICE BARRETT that the Eleventh Circuit at least fairly stated our First Amendment precedent, whereas the Fifth Circuit did not. See ante, at 1 (concurring opinion); see also ante, at 13-19 (majority opinion). On remand, then, both courts will have to undertake their legal analyses anew.
In doing so, the lower courts must address these cases at the right level of specificity. The question is not whether an entire category of corporations (like social media companies) or a particular entity (like Facebook) is generally engaged in expression. Nor is it enough to say that a given activity (say, content moderation) for a particular service (the News Feed, for example) seems roughly analogous to a more familiar example from our precedent. Cf. Red Lion Broadcasting Co. v. FCC, 395 U.S. 367, 386 (1969) (positing that "differences in the characteristics of new media justify differences in the First Amendment standards applied to them"). Even when evaluating a broad facial challenge, courts must make sure they carefully parse not only what entities are regulated, but how the regulated activities actually function before deciding if the activity in question constitutes expression and therefore comes within the First Amendment's ambit. See Brief for Knight First Amendment Institute at Columbia University as Amicus Curiae 11-12. Thus, further factual development may be necessary before either of today's challenges can be fully and fairly addressed.
In light of the high bar for facial challenges and the state of these cases as they come to us, I would not go on to treat either like an as-applied challenge and preview our potential ruling on the merits. Faced with difficult constitutional issues arising in new contexts on undeveloped records, this Court should strive to avoid deciding more than is necessary. See Ashwander v. TVA, 297 U.S. 288, 346-347 (1936) (Brandeis, J., concurring). In my view, such restraint is warranted today.
JUSTICE THOMAS, concurring in the judgment.
I agree with the Court's decision to vacate and remand because NetChoice and the Computer and Communications Industry Association (together, the trade associations) have not established that Texas's H. B. 20 and Florida's S. B. 7072 are facially unconstitutional.
I cannot agree, however, with the Court's decision to opine on certain applications of those statutes. The Court's discussion is unnecessary to its holding. See Jama v. Immigration and Customs Enforcement, 543 U.S. 335, 351, n. 12 (2005) ("Dictum settles nothing, even in the court that utters it"). Moreover, the Court engages in the exact type of analysis that it chastises the Courts of Appeals for performing. It faults the Courts of Appeals for focusing on only one subset of applications, rather than determining whether each statute's "full range of applications" are constitutional. See ante, at 10, 12. But, the Court repeats that very same error. Out of the sea of "variegated and complex" functions that platforms perform, ante, at 11, the Court plucks out two (Facebook's News Feed and YouTube's homepage), and declares that they may be protected by the First Amendment. See ante, at 26 (opining on what the "current record suggests"). The Court does so on a record that it itself describes as "incomplete" and "underdeveloped," ante, at 12, 20, and by sidestepping several pressing factual and legal questions, see post, at 29-32 (ALITO, J., concurring in judgment). As JUSTICE ALITO explains, the Court's approach is both unwarranted and mistaken. See ibid.
I agree with JUSTICE ALITO's analysis and join his opinion in full. I write separately to add two observations on the merits and to highlight a more fundamental jurisdictional problem. The trade associations have brought facial challenges alleging that H. B. 20 and S. B. 7072 are unconstitutional in many or all of their applications. But, Article III of the Constitution permits federal courts to exercise judicial power only over "Cases" and "Controversies." Accordingly, federal courts can decide whether a statute is constitutional only as applied to the parties before them- they lack authority to deem a statute "facially" unconstitutional.
I
As JUSTICE ALITO explains, the trade associations have failed to provide many of the basic facts necessary to evaluate their challenges to H. B. 20 and S. B. 7072. See post, at 22-29. I make two additional observations.
First, with respect to certain provisions of H. B. 20 and S. B. 7072, the Court assumes that the framework outlined in Zauderer v. Office of Disciplinary Counsel of Supreme Court of Ohio, 471 U.S. 626 (1985), applies. See ante, at 11. In that case, the Court held that laws requiring the disclosure of factual information in commercial advertising may satisfy the First Amendment if the disclosures are "reasonably related" to the Government's interest in preventing consumer deception. 471 U.S., at 651. Because the trade associations did not contest Zauderer's applicability before the Eleventh Circuit and both lower courts applied its framework, I agree with the Court's decision to rely upon Zauderer at this stage. However, I think we should reconsider Zauderer and its progeny. "I am skeptical of the premise on which Zauderer rests: that, in the commercial-speech context, the First Amendment interests implicated by disclosure requirements are substantially weaker than those at stake when speech is actually suppressed." Milavetz, Gallop & Milavetz, P. A. v. United States, 559 U.S. 229, 255 (2010) (THOMAS, J., concurring in part and concurring in judgment) (internal quotation marks omitted).
Second, the common-carrier doctrine should continue to guide the lower courts' examination of the trade associations' claims on remand. See post, at 18, and n. 17, 30 (opinion of ALITO, J.). "[O]ur legal system and its British predecessor have long subjected certain businesses, known as common carriers, to special regulations, including a general requirement to serve all comers." Biden v. Knight First Amendment Institute at Columbia Univ., 593 U.S. ___ (2021) (THOMAS, J., concurring in grant of certiorari) (slip op., at 3). Moreover, "there is clear historical precedent for regulating transportation and communications networks in a similar manner as traditional common carriers" given their many similarities. Id., at ___ (slip op., at 5). Though they reached different conclusions, both the Fifth Circuit and the Eleventh Circuit appropriately strove to apply the common-carrier doctrine in assessing the constitutionality of H. B. 20 and S. B. 7072 respectively. See 49 F. 4th 439, 469-480 (CA5 2022); NetChoice v. Attorney Gen., Fla., 34 F. 4th 1196, 1219-1222 (CA11 2022).
The common-carrier doctrine may have weighty implications for the trade associations' claims. But, the same factual barriers that preclude the Court from assessing the trade associations' claims under our First Amendment precedents also prevent us from applying the common-carrier doctrine in this posture. At a minimum, we would need to pinpoint the regulated parties and specific conduct being regulated. On remand, however, both lower courts should continue to consider the common-carrier doctrine.
II
The opinions in these cases detail many of the considerable hurdles that currently preclude resolution of the trade associations' claims. See ante, at 9-10; ante, at 1-4 (BARRETT, J., concurring); post, at 22-32 (opinion of ALITO, J.). The most significant problem of all, however, has yet to be addressed: Federal courts lack authority to adjudicate the trade associations' facial challenges.
Rather than allege that the statutes impermissibly regulate them, the trade associations assert that H. B. 20 and S. B. 7072 are actually unconstitutional in most or all of their applications. This type of challenge, called a facial challenge, is "an attack on a statute itself as opposed to a particular application." Los Angeles v. Patel, 576 U.S. 409, 415 (2015).
Facial challenges are fundamentally at odds with Article III. Because Article III limits federal courts' judicial power to cases or controversies, federal courts "lac[k] the power to pronounce that [a] statute is unconstitutional" as applied to nonparties. Americans for Prosperity Foundation v. Bonta, 594 U.S. 595, 621 (2021) (THOMAS, J., concurring in part and concurring in judgment) (internal quotation marks omitted). Entertaining facial challenges in spite of that limitation arrogates powers reserved to the political branches and disturbs the relationship between the Federal Government and the States. The practice of adjudicating facial challenges creates practical concerns as well. Facial challenges' dubious historical roots further confirm that the doctrine should have no place in our jurisprudence.
A
1
Article III empowers federal courts to exercise "judicial Power" only over "Cases" and "Controversies." This Court has long recognized that those terms impose substantive constraints on the authority of federal courts. See Muskrat v. United States, 219 U.S. 346, 356-358 (1911); see also Steel Co. v. Citizens for Better Environment, 523 U.S. 83, 102 (1998). One corollary of the case-or-controversy requirement is that while federal courts can judge the constitutionality of statutes, they may do so only to the extent necessary to resolve the case at hand. "It is emphatically the province and duty of the judicial department to say what the law is," but only because "[t]hose who apply the rule to particular cases, must of necessity expound and interpret that rule." Marbury v. Madison, 1 Cranch 137, 177 (1803); see Liverpool, New York & Philadelphia S. S. Co. v. Commissioners of Emigration, 113 U.S. 33, 39 (1885) ([The Court] "has no jurisdiction to pronounce any statute . . . irreconcilable with the Constitution, except as it is called upon to adjudge the legal rights of litigants in actual controversies"). Accordingly, "[e]xcept when necessary" to resolve a case or controversy, "courts have no charter to review and revise legislative and executive action." Summers v. Earth Island Institute, 555 U.S. 488, 492 (2009); see United States v. Raines, 362 U.S. 17, 20-21 (1960).
These limitations on the power of judicial review play an essential role in preserving our constitutional structure. Our Constitution sets forth a "tripartite allocation of power," separating different types of powers across three coequal branches. DaimlerChrysler Corp. v. Cuno, 547 U.S. 332, 341 (2006) (internal quotation marks omitted). "[E]ach branch [is vested] with an exclusive form of power," and "no branch can encroach upon the powers confided to the others." Patchak v. Zinke, 583 U.S. 244, 250 (2018) (plurality opinion) (internal quotation marks omitted). In the Judicial Branch's case, it is vested with the "ultimate and supreme" power of judicial review. Chicago & Grand Trunk R. Co. v. Wellman, 143 U.S. 339, 345 (1892). That power includes the authority to refuse to apply a statute enacted and approved by the other two branches of the Federal Government. But, the power of judicial review can be wielded only in specific circumstances and to limited ends: to resolve cases and controversies. Without that limitation, the Judiciary would have an unchecked ability to enjoin duly enacted statutes. Respecting the case-or-controversy requirement is therefore necessary to "preven[t] the Federal Judiciary from intruding upon the powers given to the other branches, and confin[e] the federal courts to a properly judicial role." Town of Chester v. Laroe Estates, Inc., 581 U.S. 433, 438 (2017) (internal quotation marks and alteration omitted).
2
Facial challenges conflict with Article III's case-or-controversy requirement because they ask a federal court to decide whether a statute might conflict with the Constitution in cases that are not before the court.
To bring a facial challenge under our precedents, a plaintiff must ordinarily "establish that no set of circumstances exists under which the Act would be valid." United States v. Salerno, 481 U.S. 739, 745 (1987). In the First Amendment context, we have sometimes applied an even looser standard, called the overbreadth doctrine. The overbreadth doctrine requires a plaintiff to establish only that a statute "prohibits a substantial amount of protected speech," "relative to [its] plainly legitimate sweep." United States v. Williams, 553 U.S. 285, 292 (2008).
Facial challenges ask courts to issue holdings that are rarely, if ever, required to resolve a single case or controversy. The only way a plaintiff gets into a federal court is by showing that he "personally has suffered some actual or threatened injury as a result of the putatively illegal conduct of the defendant." Blum v. Yaretsky, 457 U.S. 991, 999 (1982) (internal quotation marks omitted). And, the only remedy a plaintiff should leave a federal court with is one "limited to the inadequacy that produced the injury in fact that the plaintiff has established." Lewis v. Casey, 518 U.S. 343, 357 (1996). Accordingly, once a court decides whether a statute can be validly enforced against the plaintiff who challenges it, that case or controversy is resolved. Either the court remedies the plaintiff's injury, or it determines that the statute may be constitutionally applied to the plaintiff.
Proceeding to decide the merits of possible constitutional challenges that could be brought by other plaintiffs is not necessary to resolve that case. Instead, any holding with respect to potential future plaintiffs would be "no more than an advisory opinion-which a federal court should never issue at all, and especially should not issue with regard to a constitutional question, as to which we seek to avoid even nonadvisory opinions." Chicago v. Morales, 527 U.S. 41, 77 (1999) (Scalia, J., dissenting) (citation omitted).
Unsurprisingly, facial challenges are at odds with doctrines enforcing the case-or-controversy requirement. Pursuant to standing doctrine, for example, a plaintiff can maintain a suit in a federal court-and thus invoke judicial power-only if he has suffered an "injury" with a "traceable connection" to the "complained-of conduct of the defendant." Steel Co., 523 U.S., at 103. Facial challenges significantly relax those rules. Start with the injury requirement. Facial challenges allow a plaintiff to challenge applications of a statute that have not injured him. But see Acheson Hotels, LLC v. Laufer, 601 U.S. 1, 10 (2023) (THOMAS, J., concurring in judgment) ("To have standing, a plaintiff must assert a violation of his [own] rights"). In fact, under our First Amendment overbreadth doctrine, a plaintiff need not be injured at all; he can challenge a statute that lawfully applies to him so long as it would be unlawful to enforce it against others. See United States v. Hansen, 599 U.S. 762, 769 (2023).
Facial challenges also distort standing doctrine's redressability requirement. The Court has held that a plaintiff has standing to sue only when his "requested relief will redress the alleged injury." Steel Co., 523 U.S., at 103. With a facial challenge, however, a plaintiff seeks to enjoin every application of a statute, including ones that have nothing to do with his injury. A plaintiff can ask, "Do [I] just want [the court] to say that this statute cannot constitutionally be applied to [me] in this case, or do [I] want to go for broke and try to get the statute pronounced void in all its applications?" Morales, 527 U.S., at 77 (opinion of Scalia, J.). In this sense, the remedy sought by a facial challenge is akin to a universal injunction, a practice that is itself "inconsistent with longstanding limits on equitable relief and the power of Article III courts." Trump v. Hawaii, 585 U.S. 667, 713 (2018) (THOMAS, J., concurring); see Department of Homeland Security v. New York, 589 U.S. ___, ___-___ (2020) (GORSUCH, J., concurring in grant of stay) (slip op., at 2-3); FDA v. Alliance for Hippocratic Medicine, 602 U.S. 367, 402 (2024) (THOMAS, J., concurring).
Because deciding the constitutionality of a statute as applied to nonparties is not necessary to resolve a case or controversy, it is beyond a federal court's constitutional authority. Federal courts have "no power per se to review and annul acts of Congress on the ground that they are unconstitutional. That question may be considered only when the justification for some direct injury suffered or threatened, presenting a justiciable issue, is made to rest upon such an act." Massachusetts v. Mellon, 262 U.S. 447, 488 (1923). Resolving facial challenges thus violates Article III.
This is not to say that federal courts can never adjudicate a constitutional claim if a plaintiff styles it as a facial challenge. Whenever a plaintiff alleges a statute is unconstitutional in many or all of its applications, that argument nearly always includes an allegation that the statute is unconstitutional as applied to the plaintiff. Federal courts are free to consider challenged statutes as applied to the plaintiff before them and limit any relief accordingly. See generally Americans for Prosperity Foundation v. Bonta, 594 U.S. 595, 618-619 (2021); id., at 621 (THOMAS, J., concurring in part and concurring in judgment).
3
Adjudicating facial challenges also intrudes upon powers reserved to the Legislative and Executive Branches and the States. When a federal court decides an issue unnecessary for resolving a case or controversy, the Judiciary assumes authority beyond what the Constitution granted. Supra, at 5-6. That necessarily alters the balance of powers: When one branch exceeds its vested power, it becomes stronger relative to the other branches. See Free Enterprise Fund v. Public Company Accounting Oversight Bd., 561 U.S. 477, 500 (2010).
Moreover, by exceeding their Article III powers, federal courts risk interfering with the executive and legislative functions. Facial challenges enable federal courts to review the constitutionality of a statute in many or all of its applications, often before the statute has even been enforced. In practice, this provides federal courts a "general veto power . . . upon the legislation of Congress." Muskrat, 219 U.S., at 357. But, the Judicial Branch has no such constitutional role in lawmaking. When courts take on the supervisory role of judging statutes in the abstract, they thus "assume a position of authority over the governmental acts of another and co-equal department, an authority which plainly [they] do not possess." Mellon, 262 U.S., at 489.
Comparing the effects of as-applied challenges and facial challenges makes this point clear. With an as-applied challenge, the Judiciary intrudes only as much as necessary on the will "'of the elected representatives of the people.'" Washington State Grange v. Washington State Republican Party, 552 U.S. 442, 451 (2008). Assuming a court adheres to traditional remedial limits, a successful as-applied challenge only prevents application of the statute against that plaintiff. The Executive Branch remains free to enforce the statute in all of its other applications. And, the court's decision provides some notice to the political branches, enabling the Executive Branch to tailor future enforcement of the statute to avoid violating the Constitution or Congress to amend the statute.
Facial challenges, however, force the Judiciary to take a maximalist approach. A single plaintiff can immediately call upon a federal court to declare an entire statute unconstitutional, even before it has been applied to him. The political branches have no opportunity to correct course, making legislation an all-or-nothing proposition. The end result is that "the democratic process" is "short circuit[ed]" and "laws embodying the will of the people [are prevented] from being implemented in a manner consistent with the Constitution." Ibid.
In a similar vein, facial challenges distort the relationship between the Federal Government and the States. The Constitution "establishes a system of dual sovereignty between the States and the Federal Government." Gregory v. Ashcroft, 501 U.S. 452, 457 (1991). The States retain all powers "not delegated" to the Federal Government and not "prohibited by [the Constitution] to the States." Amdt. 10. Facial challenges can upset this division by shifting power from the States to the Federal Judiciary. Most obviously, when a state law is challenged, a facial challenge prevents that State from applying its own statute in a constitutional manner. But, facial challenges can also force federal courts to appropriate the role of state courts. To analyze whether a statute is valid on its face, a court must determine the statute's scope. If a state court has yet to determine the scope of its statute (a common occurrence with facial challenges), the federal court must do so in the first instance. Facial challenges thus increase the likelihood that federal courts must interpret novel state-law questions-a role typically and appropriately reserved for state courts.
B
In addition to their constitutional infirmities, facial challenges also create practical problems. The case-or-controversy requirement serves as the foundation of our adversarial system. Rather than "'sit[ting] as self-directed boards of legal inquiry and research,'" federal courts serve as "'arbiters of legal questions presented and argued by the parties before them.'" NASA v. Nelson, 562 U.S. 134, 147, n. 10 (2011) (quoting Carducci v. Regan, 714 F.2d 171, 177 (CADC 1983) (opinion for the court by Scalia, J.)). This system "assure[s] that the legal questions presented to the court will be resolved . . . in a concrete factual context conducive to a realistic appreciation of the consequences of judicial action." Valley Forge Christian College v. Americans United for Separation of Church and State, Inc., 454 U.S. 464, 472 (1982).
Facial challenges disrupt the adversarial system and increase the risk of judicial error as a result. A plaintiff raising a facial challenge need not have any direct knowledge of how the statute applies to others. In fact, since a facial challenge may be brought before a statute has been enforced against anyone, a plaintiff often can only guess how the statute operates, even in his own case. For this reason, "[c]laims of facial invalidity often rest on speculation," Washington State Grange, 552 U.S., at 450, and "factually barebones records," Sabri v. United States, 541 U.S. 600, 609 (2004). Federal courts are often called to give "premature interpretations of statutes in areas where their constitutional application might be cloudy." Raines, 362 U.S., at 22. In short, facial challenges ask courts to resolve potentially thorny constitutional questions with little factual background and briefing by a party who may not be affected by the outcome.
C
The problems with facial challenges are particularly evident in the two cases before us. Even though the trade associations challenge two state laws, the state actors have been left out of the picture. State officials had no opportunity to tailor the laws' enforcement. Nor could the legislatures amend the statutes before they were preliminarily enjoined. In addition, neither set of state courts had a chance to interpret their own State's law or "accord [that] law a limiting construction to avoid constitutional questions." Washington State Grange, 552 U.S., at 450. Instead, federal courts construed these novel state laws in the first instance. And, they did so with little factual record to assist them. The trade associations' reliance on our questionable associational-standing doctrine is partially to blame. But, the fact that the trade associations raise facial challenges has undeniably played a significant role. With even simple fact patterns, a court has little chance of determining whether a novel, never-before-enforced state law can be constitutionally enforced against nonparties without resorting to mere speculation. For cases such as these, where the constitutional analysis depends on complex, fact-specific questions, the task becomes impossible.
The trade associations do not allege that they are subject to H. B. 20 and S. B. 7072, but have brought suit to vindicate the rights of their members. There is thus not a single party in these suits that is actually regulated by the challenged statutes and can explain how specific provisions will infringe on their First Amendment rights. Instead, the trade associations assert their understanding of how the challenged statutes will regulate nonparties. As I have recently explained, "[a]ssociational standing raises constitutional concerns." See FDA v. Alliance for Hippocratic Medicine, 602 U.S. 367, 399 (2024) (concurring opinion). Associational standing appears to conflict with Article III's injury and redressability requirements in many of the same ways as facial challenges. I have serious doubts that either trade association has standing to vicariously assert a member's injury. See id., at 400.
D
Facial challenges are particularly suspect given their origins. They appear to be the product of two doctrines that are themselves constitutionally questionable: vagueness and overbreadth.
At the time of the founding, it was well understood that federal courts could hold a statute unconstitutional only insofar as necessary to resolve a particular case or controversy. See supra, at 5-6. The Founders were certainly familiar with alternative systems that provided for the free-floating review of duly enacted statutes. For example, the New York Constitution of 1777 created a Council of Revision, composed of the Governor, Chancellor, and New York Supreme Court. See Hansen, 599 U.S., at 786 (THOMAS, J., concurring). The Council of Revision could object to "any measure of a [prospective] bill" based on "not only [its] constitutionality . . . but also [its] policy." Id., at 787. If the Council lodged an objection, the Legislature's only options were to "conform to [the Council's] objections, override them by a two-thirds vote of both Houses, or simply let the bill die." Ibid. (internal quotation marks omitted).
In our Constitution, the Founders refused to create a council of revision or involve the Federal Judiciary in the business of reviewing statutes in the abstract. "Despite the support of respected delegates . . . the Convention voted against creating a federal council of revision on four different occasions. No other proposal was considered and rejected so many times." Id., at 789 (citation omitted). Instead, the Founders created a Judiciary with "only the authority to resolve private disputes between particular parties, rather than matters affecting the general public." Ibid. (internal quotation marks omitted). They considered judges "of all men the most unfit to have a veto on laws before their enactment." Ibid. (internal quotation marks omitted). Therefore, they refused to enlist judges in the business of reviewing statutes other than "as an issue for decision in a concrete case or controversy." Ibid.
"The later history of the New York Council of Revision demonstrates the wisdom of the Framers' decision." United States v. Hansen, 599 U.S. 762, 790 (2023) (THOMAS, J., concurring). The Council's ability to lodge objections proved significant: "Over the course of its existence, [the Council] returned 169 bills to the legislature; the legislature, in turn, overrode only 51 of those vetoes and reenacted at least 26 bills with modifications." Ibid. The Council did not shy away from controversial or weighty matters either. It vetoed, among other things, "a bill barring those convicted of adultery from remarrying" and a bill "declar[ing] Loyalists aliens." Ibid. In fact, the bill authorizing the Erie Canal's construction-"one of the most important measures in the Nation's history-survived the Council's review only because Chancellor James Kent changed his deciding vote at the last minute, seemingly on a whim." Ibid. Concerns over the Council's "intrusive involvement in the legislative process" eventually led to its abolition in 1820. Ibid.
For more than a century following the founding, the Court generally adhered to the original understanding of the narrow scope of judicial review. When the Court first discussed the concept of judicial review in Marbury v. Madison, it made clear that such review is limited to what is necessary for resolving "a particular cas[e]" before a court. 1 Cranch, at 177; see also supra, at 5-6. And, in case after case that followed Marbury, the Court reiterated that federal courts have no authority to reach beyond the parties before them to facially invalidate a statute.
See, e.g., Austin v. Aldermen, 7 Wall. 694, 699 (1869) (holding that the Court could "only consider the statute in connection with the case before" it and thus "our jurisdiction [wa]s at an end" once it "ascertained that [the case] wrought no effect which the act forbids"); Liverpool, New York & Philadelphia S. S. Co. v. Commissioners of Emigration, 113 U.S. 33, 39 (1885) (the Court "has no jurisdiction to pronounce any statute . . . irreconcilable with the Constitution, except as it is called upon to adjudge the legal rights of litigants in actual controversies"); Chicago & Grand Trunk R. Co. v. Wellman, 143 U.S. 339, 345 (1892) (explaining that judicial review of a statute's constitutionality "is legitimate only in the last resort, and as a necessity in the determination of real, earnest, and vital controversy between individuals"); Muskrat v. United States, 219 U.S. 346, 357 (1911) ("[T]here [i]s no general veto power in the court upon the legislation of Congress"); Yazoo & Mississippi Valley R. Co. v. Jackson Vinegar Co., 226 U.S. 217, 219 (1912) (rejecting argument that statute was "void in toto," because the Court "must deal with the case in hand and not with imaginary ones"); Dahnke-Walker Milling Co. v. Bondurant, 257 U.S. 282, 289 (1921) ("[A] litigant can be heard to question a statute's validity only when and so far as it is being or is about to be applied to his disadvantage"); Massachusetts v. Mellon, 262 U.S. 447, 488 (1923) (Federal courts "have no power per se to review and annul acts of Congress on the ground that they are unconstitutional. That question may be considered only when the justification for some direct injury suffered or threatened, presenting a justiciable issue, is made to rest upon such an act").
As best I can tell, the Court's first departure from those principles was the development of the vagueness doctrine. See Johnson v. United States, 576 U.S. 591, 616-620 (2015) (THOMAS, J., concurring in judgment) (describing history of vagueness doctrine). Before and at the time of the founding, American and English courts dealt with vague laws by "simply refus[ing] to apply them in individual cases." Id., at 615. After the unfortunate rise of "substantive" due process, however, American courts began striking down statutes wholesale as "unconstitutionally indefinite." Id., at 617. This Court first adopted that approach in 1914, see International Harvester Co. of America v. Kentucky, 234 U.S. 216, and has since repeatedly used the vagueness doctrine "to strike down democratically enacted laws" in the name of substantive due process, Sessions v. Dimaya, 584 U.S. 148, 210 (2018) (THOMAS, J., dissenting); see Johnson, 576 U.S., at 618-621 (opinion of THOMAS, J.). As I have explained, I doubt that "our practice of striking down statutes as unconstitutionally vague is consistent with the original meaning of the Due Process Clause." Dimaya, 584 U.S., at 206 (opinion of THOMAS, J.); see Johnson, 576 U.S., at 622 (opinion of THOMAS, J.).
The vagueness doctrine was the direct ancestor of one subset of modern facial challenges, the overbreadth doctrine. See United States v. Sineneng-Smith, 590 U.S. 371, 385 (2020) (THOMAS, J., concurring) (noting that the overbreadth doctrine "developed as a result of the vagueness doctrine's application in the First Amendment context"). In Thornhill v. Alabama, 310 U.S. 88 (1940), the Court deemed an antipicketing statute "invalid on its face" due to its "sweeping proscription of freedom of discussion." Id., at 101-106. The Thornhill Court did so "[w]ithout considering whether the defendant's actual conduct was entitled to First Amendment protection," instead invalidating the law because it "'swept within its ambit . . . activities that in ordinary circumstances constitute an exercise of freedom of speech or of the press.'" Sineneng-Smith, 590 U.S., at 383 (opinion of THOMAS, J.) (quoting Thornhill, 310 U.S., at 97; alteration omitted).
Thornhill's approach quickly gained traction in the First Amendment context. In the years to follow, the Court "invoked [its] rationale to facially invalidate a wide range of laws" concerning First Amendment rights-a practice that became known as the overbreadth doctrine. Sineneng-Smith, 590 U.S., at 383. Under that doctrine, a court can invalidate a statute if it "prohibits a substantial amount of protected speech," "relative to the statute's plainly legitimate sweep." Williams, 553 U.S., at 292. The Court has never attempted to ground the overbreadth doctrine "in the text or history of the First Amendment." Sineneng-Smith, 590 U.S., at 384 (opinion of THOMAS, J.). Instead, the Court has supplied only "policy considerations and value judgments." Ibid.
Although the Court's precedents describe an unconstitutionally overbroad statute as facially "invalid," "federal courts have no authority to erase a duly enacted law from the statute books." J. Mitchell, The Writ-of-Erasure Fallacy, 104 Va. L. Rev. 933, 936 (2018); see Sineneng-Smith, 590 U.S., at 387 (opinion of THOMAS, J.).
The overbreadth and vagueness doctrines' method of facial invalidation eventually spread to other areas of law, setting in motion our modern facial challenge doctrine. For several decades after Thornhill, the Court continued to resist the broad use of facial challenges. For example, in Broadrick v. Oklahoma, 413 U.S. 601 (1973), the Court emphasized that "[c]onstitutional judgments, as Mr. Chief Justice Marshall recognized, are justified only out of the necessity of adjudicating rights in particular cases between the litigants brought before the Court." Id., at 611. In that vein, the Court characterized "facial overbreadth adjudication [as] an exception to our traditional rules of practice." Id., at 615. But the Court eventually entertained facial challenges more broadly where a plaintiff established that "no set of circumstances exists under which the Act would be valid." Salerno, 481 U.S., at 745. Just as with the overbreadth doctrine, the Court has yet to explain how facial challenges are consistent with the Constitution's text or history.
Some Members of the Court subsequently sought to apply a more lenient standard to all facial challenges. See Washington State Grange v. Washington State Republican Party, 552 U.S. 442, 449 (2008) (noting that "some Members of the Court have criticized the Salerno formulation"); United States v. Stevens, 559 U.S. 460, 472 (2010) (reserving the question of which standard applies to "a typical facial attack").
Given how our facial challenge doctrine seems to have developed-with one doctrinal mistake leading to another-it is no wonder that facial challenges create a host of constitutional and practical issues. See supra, at 6-13. Rather than perpetuate our mistakes, the Court should end them. "No principle is more fundamental to the judiciary's proper role in our system of government than the constitutional limitation of federal-court jurisdiction to actual cases or controversies." Simon v. Eastern Ky. Welfare Rights Organization, 426 U.S. 26, 37 (1976). Because that requirement precludes courts from judging and enjoining statutes as applied to nonparties, the Court should discontinue the practice of facial challenges.
* * *
The Court has recognized the problems that facial challenges pose, emphasizing that they are "disfavored," Washington State Grange, 552 U.S., at 450, and "best when infrequent," Sabri, 541 U.S., at 608. The Court reiterates those sentiments today. Ante, at 9, 30. But, while sidelining facial challenges provides some measure of relief, it ignores the real problem. Because federal courts are bound by Article III's case-or-controversy requirement, holding a statute unconstitutional as applied to nonparties is not simply disfavored-it exceeds the authority granted to federal courts. It is high time the Court reconsiders its facial challenge doctrine.
JUSTICE ALITO, with whom JUSTICE THOMAS and JUSTICE GORSUCH join, concurring in the judgment.
The holding in these cases is narrow: NetChoice failed to prove that the Florida and Texas laws it challenged are facially unconstitutional. Everything else in the opinion of the Court is nonbinding dicta.
I agree with the bottom line of the majority's central holding. But its description of the Florida and Texas laws, as well as the litigation that shaped the question before us, leaves much to be desired. Its summary of our legal precedents is incomplete. And its broader ambition of providing guidance on whether one part of the Texas law is unconstitutional as applied to two features of two of the many platforms that it reaches-namely, Facebook's News Feed and YouTube's homepage-is unnecessary and unjustified.
But given the incompleteness of this record, there is no need and no good reason to decide anything other than the facial unconstitutionality question actually before us. After all, we do not know how the platforms "moderate" their users' content, much less whether they do so in an inherently expressive way under the First Amendment. Nevertheless, the majority is undeterred. It inexplicably singles out a few provisions and a couple of platforms for special treatment. And it unreflectively assumes the truth of NetChoice's unsupported assertion that social-media platforms-which use secret algorithms to review and moderate an almost unimaginable quantity of data today-are just as expressive as the newspaper editors who marked up typescripts in blue pencil 50 years ago.
These as-applied issues are important, and we may have to decide them before too long. But these cases do not provide the proper occasion to do so. For these reasons, I am compelled to provide a more complete discussion of those matters than is customary in an opinion that concurs only in the judgment.
I
As the Court has recognized, social-media platforms have become the "modern public square." Packingham v. North Carolina, 582 U.S. 98, 107 (2017). In just a few years, they have transformed the way in which millions of Americans communicate with family and friends, perform daily chores, conduct business, and learn about and comment on current events. The vast majority of Americans use social media, and the average person spends more than two hours a day on various platforms. Young people now turn primarily to social media to get the news, and for many of them, life without social media is unimaginable. Social media may provide many benefits-but not without drawbacks. For example, some research suggests that social media are having a devastating effect on many young people, leading to depression, isolation, bullying, and intense pressure to endorse the trend or cause of the day.
J. Gottfried, Pew Research Center, Americans' Social Media Use 3 (2024). As platforms incorporate new features and technology, the number of Americans who use social media is expected to grow. S. Dixon, Statista, Social Media Users in the United States 2020-2029 (Jan. 30, 2024), https://www.statista.com/statistics/278409/number-of-social-network-users-in-the-united-states.
V. Filak, Exploring Mass Communication: Connecting With the World of Media 210 (2024).
Social Media and News Platform Fact Sheet, Pew Research Center (Nov. 15, 2023), https://www.pewresearch.org/journalism/fact-sheet/social-media-and-news-fact-sheet.
M. Anderson, M. Faverio, & J. Gottfried, Pew Research Center, Teens, Social Media and Technology 2023 (Dec. 11, 2023), https://www.pewresearch.org/internet/2023/12/11/teens-social-media-and-technology-2023.
Ibid.; see also J. Twenge, J. Haidt, J. Lozano, & K. Cummins, Specification Curve Analysis Shows That Social Media Use Is Linked to Poor Mental Health, Especially Among Girls, 224 Acta Psychologica 1, 8-12 (2022).
In light of these trends, platforms and governments have implemented measures to minimize the harms unique to the social-media context. Social-media companies have created user guidelines establishing the kinds of content that users may post and the consequences of violating those guidelines, which often include removing nonconforming posts or restricting noncompliant users' access to a platform.
Such enforcement decisions can sometimes have serious consequences. Restricting access to social media can impair users' ability to speak to, learn from, and do business with others. Deleting the account of an elected official or candidate for public office may seriously impair that individual's efforts to reach constituents or voters, as well as the ability of voters to make a fully informed electoral choice. And what platforms call "content moderation" of the news or user comments on public affairs can have a substantial effect on popular views.
Concerned that social-media platforms could abuse their enormous power, Florida and Texas enacted laws that prohibit them from disfavoring particular viewpoints and speakers. See S. B. 7072, 2021 Reg. Sess., §1(9) (Fla. 2021) (finding that "[s]ocial media platforms have unfairly censored . . . Floridians"); H. B. 20, 87th Leg., Called Sess. (Tex. 2021) (prohibiting the "censorship of . . . expression on social media platforms" in Texas). Both statutes have a broad reach, and it is impossible to determine whether they are unconstitutional in all their applications without surveying those applications. The majority, however, provides only a cursory outline of the relevant provisions of these laws and the litigation challenging their constitutionality. To remedy this deficiency, I will begin with a more complete summary.
A
1
I start with Florida's law, S. B. 7072, which regulates any internet platform that does "business in the state" and has either "annual gross revenues in excess of $100 million" or "at least 100 million monthly individual platform participants globally." Fla. Stat. §501.2041(1)(g) (2023). This definition is broad. There is no dispute that it covers large social-networking websites like Facebook, X, YouTube, and Instagram, but it may also reach e-commerce and other non-social-networking websites that allow users to leave reviews, ask and answer questions, or communicate with others online. These may include Uber, Etsy, PayPal, Yelp, Wikipedia, and Gmail. See, e.g., Tr. of Oral Arg. in No. 22-555, pp. 54-56, 69, 76-79, 155; Brief for Wikimedia Foundation as Amicus Curiae 6; Brief for Yelp Inc. as Amicus Curiae 4, n. 4.
To prevent covered platforms from unfairly treating Floridians, S. B. 7072 imposes the following "content-moderation" and disclosure requirements:
Content-moderation provisions. "Content moderation" is the gentle-sounding term used by internet platforms to denote actions they take purportedly to ensure that user-provided content complies with their terms of service and "community standards." The Florida law eschews this neologism and instead uses the old-fashioned term "censorship." To prevent platforms from discriminating against certain views or speakers, that law requires each regulated platform to enforce its "censorship . . . standards in a consistent manner among its users on the platform." Fla. Stat. §501.2041(2)(b). The law defines "censorship" as any action taken to: "delete, regulate, restrict, edit, alter, [or] inhibit" users from posting their own content; "post an addendum to any content or material posted by a user"; or "inhibit the ability of a user to be viewable by or to interact with another user." §501.2041(1)(b).
To prevent platforms from attempting to evade this restriction by regularly modifying their practices, the law prohibits platforms from changing their censorship "rules, terms, and agreements . . . more than once every 30 days." §501.2041(2)(c). And to give Floridians more control over how they view content on social-media websites, the law requires each platform to give its users the ability to "opt out" of its content-sorting "algorithms" and instead view posts sequentially or chronologically. §501.2041(2)(f).
As relevant here, an "algorithm" is a program that platforms use to automatically "censor" or "moderate" content that violates their terms or conditions, to organize the results of a search query, or to display posts in a feed.
Although some platforms still have employees who monitor and organize social-media feeds, for most platforms, "the incredible volume of content shared each day makes human review of each new post impossible." Brief for Developers Alliance et al. as Amici Curiae 4. Consequently, platforms rely heavily on algorithms to organize and censor content. Ibid. And it is likely that they will increasingly rely on artificial intelligence (AI), a machine learning tool that arranges, deletes, and modifies content and learns from its own choices.
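To make the concept concrete, a purely hypothetical sketch of such an automated moderation pass follows; the rule list, function name, and matching logic are invented for illustration and are not drawn from any platform's actual system:

    # Hypothetical sketch of a rule-based moderation pass (illustrative
    # only; not any platform's actual system).
    PROHIBITED_TERMS = {"bannedword1", "bannedword2"}  # stand-ins for a platform's rules

    def moderate(post_text: str) -> str:
        """Classify one post as 'remove', 'label', or 'allow'."""
        words = set(post_text.lower().split())
        if words & PROHIBITED_TERMS:   # post contains prohibited language
            return "remove"
        if "disputed-claim" in words:  # stand-in for a fact-check trigger
            return "label"             # i.e., post an addendum to the content
        return "allow"                 # display the post unaltered

    # Run continuously over the incoming stream; at the volumes described
    # above, no human reviews individual items.
    for post in ("a harmless post", "a post with bannedword1"):
        print(post, "->", moderate(post))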
In addition to barring censorship, the Florida law attempts to prevent platforms from unfairly influencing elections or distorting public discourse. To do this, it requires platforms to host candidates for public office and journalistic enterprises. §§501.2041(2)(h), (j). For the same reasons, the law also prohibits platforms from censoring posts made by or about candidates for public office. §501.2041(2)(h).
A "journalistic enterprise" is defined as any entity doing business in Florida that: (1) has published more than 100,000 words online and has at least 50,000 paid subscribers or 100,000 monthly users; (2) has published at least 100 hours of audio or video online and has at least 100 million annual viewers; (3) operates a cable channel that produces more than 40 hours of content per week to at least 100,000 subscribers; or (4) operates under a Federal Communications Commission broadcast license. Fla. Stat. §501.2041(1)(d).
Disclosure provisions. S. B. 7072 requires platforms to make both general and individual disclosures about how and when they censor the speech of Floridians. The law requires platforms to publish their content-moderation standards and to inform users of any changes. §§501.2041(2)(a), (c). And whenever a platform censors a user, S. B. 7072 requires it to: (1) notify the user of the censorship decision in writing within seven days; (2) provide "a thorough" explanation of the action and how the platform became aware of the affected content; and (3) allow the user "to access or retrieve all of the user's information, content, material, and data for at least 60 days." §§501.2041(2)(d), (i), (3).
To ensure compliance with these provisions, S. B. 7072 authorizes the Florida attorney general to bring civil and administrative actions against noncomplying platforms. §501.2041(5). The law allows the Florida Elections Commission to fine platforms that fail to host candidates for public office. Fla. Stat. §106.072(3) (2023). And the law permits aggrieved users to sue and recover up to $100,000 for each violation of the content-moderation and disclosure provisions, along with actual damages, equitable relief, punitive damages, and attorney's fees. §501.2041(6).
To protect platforms, the law provides that it "may only be enforced to the extent not inconsistent with federal law," including §230 of the Communications Decency Act of 1996. §501.2041(9). Section 230(c)(2)(A) of that Act shields internet platforms from liability for voluntary, good-faith efforts to restrict or remove content that is "obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable." 47 U.S.C. §230(c)(2)(A).
2
Days after S. B. 7072's enactment, NetChoice filed suit in federal court, alleging that the new law violates the First Amendment in all its applications. As a result, NetChoice asked the District Court to enter a preliminary injunction against any enforcement of any of its provisions before the law took effect.
NetChoice also argued that S. B. 7072 is preempted by 47 U.S.C. §230(c) and is unconstitutionally vague. Those arguments are not before us because the District Court did not rule on the vagueness issue, 546 F. Supp. 3d 1082, 1095 (ND Fla. 2021), and the Eleventh Circuit declined to reach the preemption issue, NetChoice v. Attorney Gen., Fla., 34 F. 4th 1196, 1209 (2022).
Florida defended the constitutionality of S. B. 7072. It argued that the law's prohibition of censorship does not violate the freedom of speech because the First Amendment permits the regulation of the conduct of entities that do not express their own views but simply provide the means for others to communicate. See Record in No. 4:21-CV-00220 (ND Fla.), Doc. 106, p. 22 (citing Rumsfeld v. Forum for Academic and Institutional Rights, Inc., 547 U.S. 47, 64 (2006) (FAIR)). And, in any event, Florida argued that NetChoice's facial challenge was likely to fail at the threshold because NetChoice had not identified which of its members were required to comply with the new law or how each of its members' presentation of third-party speech expressed that platform's own message. Record, Doc. 106, at 30, 58-59; id., Doc. 118, pp. 5, 24-25. Without this information, Florida said, it could not properly respond to NetChoice's facial claim. Id., Doc. 122, pp. 4-5. Florida requested a "meaningful opportunity to take discovery." Tr. of Oral Arg. in No. 22-277, p. 154. NetChoice objected. Record, Doc. 122.
Despite these arguments, the District Court enjoined S. B. 7072 in its entirety before the law could go into effect. Florida appealed, maintaining, among other things, that NetChoice was "unlikely to prevail on the merits of [its] facial First Amendment challenge." Brief for Appellants in No. 21-12355 (CA11), p. 20; Reply Brief in No. 21-12355 (CA11), p. 15.
With just one exception, the Eleventh Circuit affirmed. It first held that all the regulated platforms' decisions about "whether, to what extent, and in what manner to disseminate third-party created content to the public" were constitutionally protected expression. NetChoice v. Attorney Gen., Fla., 34 F. 4th 1196, 1212 (2022). Under that framing, the court found that the moderation and individual-disclosure provisions likely failed intermediate scrutiny, obviating the need to determine whether strict scrutiny applied. Id., at 1227. But the court held that the general disclosure provisions, which require only that platforms publish their censorship policies, met the intermediate-scrutiny standard set forth in Zauderer v. Office of Disciplinary Counsel of Supreme Court of Ohio, 471 U.S. 626 (1985). 34 F. 4th, at 1230. The Eleventh Circuit therefore vacated the portion of the District Court's order that enjoined the enforcement of those general-disclosure provisions, while affirming all the rest of the injunction. Id., at 1231.
See also id., at 1214 ("unless posts and users are removed randomly, those sorts of actions necessarily convey some sort of message-most obviously, the platforms' disagreement with . . . certain content"); id., at 1223 ("S.B. 7072's disclosure provisions implicate the First Amendment").
B
1
Around the same time as the enactment of the Florida law, Texas adopted a similar measure, H. B. 20, which covers "social media platform[s]" with more than 50 million monthly users in the United States. Tex. Bus. & Com. Code Ann. §120.002(b) (West 2023). The statute defines a "'[s]ocial media platform'" as an "[i]nternet website or application that is open to the public, allows a user to create an account, and enables users to communicate with other users for the primary purpose of posting information, comments, messages, or images." §120.001(1). Unlike Florida's broader law, however, Texas's statute does not cover internet-service providers, email providers, and websites that "consis[t] primarily of news, sports, entertainment, or other information or content that is not user generated but is preselected by the provider." §120.001(1)(C)(i).
To ensure "the free exchange of ideas and information," H. B. 20 requires regulated platforms to abide by the following content-moderation and disclosure requirements. Act of Sept. 2, 2021, 87th Leg., 2d Called Sess., ch. 3.
Content-moderation provisions. H. B. 20 prevents social-media companies from "censoring" users-that is, acting to "block, ban, remove, deplatform, demonetize, de-boost, restrict, deny equal access or visibility to, or otherwise discriminate against"-based on their viewpoint or geographic location within Texas. Tex. Civ. Prac. & Rem. Code Ann. §§143A.001(1), 143A.002(a)(1)-(3) (West Cum. Supp. 2023). However, the law allows platforms to censor: speech that federal law "specifically authorize[s]" them to censor; speech that the platform is told sexually exploits children or survivors of sexual abuse; speech that "directly incites criminal activity or consists of specific threats of violence targeted against a person or group because of race, color, disability, religion, national origin or ancestry, age, sex or status as peace officer or judge"; and speech that is otherwise unlawful or has been the subject of a user's request for removal from his or her feed or profile. §§143A.006(a)-(b).
In general, to "deplatform" means "to remove and ban a registered user from a mass communication medium (such as a social networking or blogging website)." Merriam-Webster's Collegiate Dictionary (10th ed. 2024), (defining "deplatform"; some punctuation omitted), https:// unabridged.merriam-webster.com/collegiate/deplatform (unless otherwise noted, all internet sites last accessed May 22, 2024).
"[D]emonetization" often refers to the act of preventing "online content from earning revenue (as from advertisements)." Ibid. (defining "demonetize"; some punctuation omitted), https://unabridged.merriam-webster.com/collegiate/demonetize.
"Boosting on social media means [paying] a platform to amplify . . . posts for more reach." C. Williams, HubSpot, Social Media Definitions: The Ultimate Glossary of Terms You Should Know (June 23, 2023), https://blog.hubspot.com/marketing/social-media-terms. De-boosting thus usually refers to when platforms refuse to continue increasing a post's or user's visibility to other users.
Disclosure provisions. Like the Florida law, H. B. 20 also requires platforms to make general and individual disclosures about their censorship practices. Specifically, the law obligates each platform to tell the public how it "targets," "promotes," and "moderates" content. §§120.051(a)(1)-(3). And whenever a platform censors a user, the law requires it to inform the user why that was done. §120.103(a)(1).
Texas has represented that a brief computer-generated notification to an affected user would satisfy the provision's notification requirement. Brief for Respondent in No. 22-555, p. 44.
Platforms must allow users to appeal removal decisions through "an easily accessible complaint system"; resolve such appeals within 14 business days (unless an enumerated exception applies); and, if the appeal is successful, provide "the reason for the reversal." §§120.101, 120.103(a)(2), (a)(3)(B)-(b), 120.104.
Users may sue any platform that violates these provisions, as may the Texas attorney general. §143A.007(d). But unlike the Florida law, H. B. 20 authorizes only injunctive relief. §§143A.007(a), 143A.008. It contains a strong severability provision, §8(a), which reaches "every provision, section, subsection, sentence, clause, phrase, or word in th[e] Act, and every application of [its] provisions."
2
As it did in the Florida case, NetChoice sought a preliminary injunction in federal court, claiming that H. B. 20 violates the First Amendment in its entirety. In response, Texas argued that because H. B. 20 regulates NetChoice's members "in their operation as publicly accessible conduits for the speech of others" rather than "as authors or editors" of their own speech, NetChoice could not prevail. Record in No. 1:21-CV-00840 (WD Tex.), Doc. 39, p. 23. But even if the platforms might have the right to use algorithms to censor their users' speech, the State argued, the question of "what these algorithms are doing is a critical, and so far, unexplained, aspect of this case." Id., at 24. This deficiency mattered, Texas contended, because the platforms could succeed on their facial challenge only by showing that "all algorithms used by the Platforms are for the purposes of expressing viewpoints of those Platforms." Id., at 27. And because NetChoice had not even explained what its members' algorithms did, much less whether they did so in an expressive way, Texas argued that NetChoice had not shown that "all applications of H.B. 20 are unconstitutional." Ibid.; see also id., Doc. 53, at 13 (arguing that NetChoice had failed to show that "H. B. 20 is . . . unconstitutional in all its applications" because "a number" of NetChoice's members had conceded that the law did "not burden or chill their speech").
To clarify these and other "threshold issues," Texas moved for expedited discovery. Id., Doc. 20, at 1. The District Court granted Texas's motion in part, but after one month of discovery, it sided with NetChoice and enjoined H. B. 20 in its entirety before it could go into effect. Texas appealed, arguing that despite the District Court's judgment to the contrary, "[l]aws requiring commercial entities to neutrally host speakers generally do not even implicate the First Amendment because they do not regulate the host's speech at all-they regulate its conduct." Brief for Appellant in No. 21-51178 (CA5), p. 16. The State also emphasized NetChoice's alleged failure to show that H. B. 20 was unconstitutional in even a "'substantial number of its applications,'" the "bare minimum" showing that NetChoice needed to make to prevail on its facial challenge. E.g., Reply Brief in No. 21-51178 (CA5), p. 8 (quoting Americans for Prosperity Foundation v. Bonta, 594 U.S. 595, 615 (2021)).
A divided Fifth Circuit panel reversed, focusing primarily on NetChoice's failure to "even try to show that HB 20 is 'unconstitutional in all of its applications.'" 49 F. 4th 439, 449 (2022) (quoting Washington State Grange v. Washington State Republican Party, 552 U.S. 442, 449 (2008)). The court also accepted Texas's argument that H. B. 20 "does not regulate the Platforms' speech at all" because "the Platforms are not 'speaking' when they host other people's speech." 49 F. 4th, at 448. Finally, the court upheld the law's disclosure requirements on the ground that they involve the disclosure of the type of purely factual and uncontroversial information that may be compelled under Zauderer. 49 F. 4th, at 485.
II
NetChoice contends that the Florida and Texas statutes facially violate the First Amendment, meaning that they cannot be applied to anyone at any time under any circumstances without violating the Constitution. Such challenges are strongly disfavored. See Washington State Grange, 552 U.S., at 452. They often raise the risk of "'premature interpretatio[n] of statutes' on the basis of factually barebones records." Sabri v. United States, 541 U.S. 600, 609 (2004). They clash with the principle that courts should neither "'anticipate a question of constitutional law in advance of the necessity of deciding it'" nor "'formulate a rule of constitutional law broader than is required by the precise facts to which it is to be applied.'" Ashwander v. TVA, 297 U.S. 288, 346-347 (1936) (Brandeis, J., concurring). And they "threaten to short circuit the democratic process by preventing laws embodying the will of the people from being implemented in a manner consistent with the Constitution." Washington State Grange, 552 U.S., at 451.
Facial challenges also strain the limits of the federal courts' constitutional authority to decide only actual "Cases" and "Controversies." Art. III, §2. "[L]itigants typically lack standing to assert the constitutional rights of third parties." United States v. Hansen, 599 U.S. 762, 769 (2023). But when a court holds that a law cannot be enforced against anyone under any circumstances, it effectively grants relief with respect to unknown parties in disputes that have not yet materialized.
For these reasons, we have insisted that parties mounting facial attacks satisfy demanding requirements. In United States v. Salerno, 481 U.S. 739, 745 (1987), we held that a facial challenger must "establish that no set of circumstances exists under which the [law] would be valid." "While some Members of the Court have criticized the Salerno formulation," all have agreed "that a facial challenge must fail where the statute has a 'plainly legitimate sweep.'" Washington State Grange, 552 U.S., at 449. In First Amendment cases, we have sometimes phrased the requirement as an obligation to show that a law "prohibits a substantial amount of protected speech" relative to its "plainly legitimate sweep." Hansen, 599 U.S., at 770; Bonta, 594 U.S., at 615; United States v. Williams, 553 U.S. 285, 292-293 (2008).
At oral argument, NetChoice represented that "it's the plainly legitimate sweep test, which is not synonymous with overbreadth," that governs these cases. See Tr. of Oral Arg. in No. 22-277, p. 70; contra, ante, at 9 (suggesting that the overbreadth doctrine applies to all facial challenges brought under the First Amendment, including these cases). This representation makes sense given that the overbreadth doctrine applies only when there is "a realistic danger that the statute itself will significantly compromise recognized First Amendment protections of parties not before the Court." Members of City Council of Los Angeles v. Taxpayers for Vincent, 466 U.S. 789, 801 (1984). And here, NetChoice appears to represent all-or nearly all-regulated parties.
NetChoice and the Federal Government urge us not to apply any of these demanding tests because, they say, the States disputed only the "threshold question" whether their laws "cover expressive activity at all." Tr. of Oral Arg. in No. 22-277, at 76; see also id., at 84, 125; Tr. of Oral Arg. in No. 22-555, at 92. The Court unanimously rejects that argument-and for good reason.
First, the States did not "put all their eggs in [one] basket." Tr. of Oral Arg. in No. 22-277, at 76. To be sure, they argued that their newly enacted laws were valid in all their applications. Ibid. Both the Federal Government and the States almost always defend the constitutionality of all provisions of their laws. But Florida and Texas did not stop there. Rather, as noted above, they went on to argue that NetChoice had failed to make the showing required for a facial challenge. Therefore, the record does not support NetChoice's attempt to use "the party presentation rules" as grounds for blocking our consideration of the question whether it satisfied the facial constitutionality test. Tr. of Oral Arg. in No. 22-555, at 92.
See Reply Brief in No. 21-12355 (CA11), p. 15 ("Plaintiffs-in their facial challenge-have failed to demonstrate that even a significant subset of covered social media platforms engages in [expressive] conduct."). See also Brief for Appellants in No. 21-12355 (CA11), p. 20 (NetChoice is "unlikely to prevail on the merits of [its] facial First Amendment challenge"); Record in No. 4:21-CV-00220 (ND Fla.), Doc. 106, p. 30 ("Plaintiffs have not demonstrated that their members actually [express a message]," so there is "not a basis for sustaining Plaintiffs' facial constitutional challenge"); Reply Brief in No. 21-51178 (CA5), p. 8 (arguing that NetChoice failed "to show at a bare minimum that [H. B. 20] is unconstitutional in a 'substantial number of its applications'" (quoting Americans for Prosperity Foundation v. Bonta, 594 U.S. 595, 615 (2021))); Record in No. 1:21-CV-00840 (WD Tex.), Doc. 39, p. 27 (because "not all applications of H.B. 20 are unconstitutional," "Plaintiffs' delayed facial challenge [can]not succeed").
Second, even if the States had not asked the lower courts to reject NetChoice's request for blanket relief, it would have been improper for those courts to enjoin all applications of the challenged laws unless that test was met. "It is one thing to allow parties to forfeit claims, defenses, or lines of argument; it would be quite another to allow parties to stipulate or bind [a court] to the application of an incorrect legal standard." Gardner v. Galetka, 568 F.3d 862, 879 (CA10 2009); see also Kairys v. Southern Pines Trucking, Inc., 75 F. 4th 153, 160 (CA3 2023) ("But parties cannot forfeit the application of 'controlling law'"); United States v. Escobar, 866 F.3d 333, 339, n. 13 (CA5 2017) (per curiam) ("'A party cannot waive, concede, or abandon the applicable standard of review'" (quoting Ward v. Stephens, 777 F.3d 250, 257, n. 3 (CA5 2015))).
Represented by sophisticated counsel, NetChoice made the deliberate choice to mount a facial challenge to both laws, and in doing so, it obviously knew what it would have to show in order to prevail. NetChoice decided to fight these laws on these terms, and the Court properly holds it to that decision.
III
I therefore turn to the question whether NetChoice established facial unconstitutionality, and I begin with the States' content-moderation requirements. To show that these provisions are facially invalid, NetChoice had to demonstrate that they lack a plainly legitimate sweep under the First Amendment. Our precedents interpreting that Amendment provide the numerator (the number of unconstitutional applications) and denominator (the total number of possible applications) that NetChoice was required to identify in order to make that showing. Estimating the numerator requires an understanding of the First Amendment principles that must be applied here, and I therefore provide a brief review of those principles.
A
The First Amendment protects "the freedom of speech," and most of our cases interpreting this right have involved government efforts to forbid, restrict, or compel a party's own oral or written expression. Agency for Int'l Development v. Alliance for Open Society Int'l, Inc., 570 U.S. 205, 213 (2013); Wooley v. Maynard, 430 U.S. 705, 714 (1977); West Virginia Bd. of Ed. v. Barnette, 319 U.S. 624, 642 (1943). Some cases, however, have involved another aspect of the free speech right, namely, the right to "presen[t] . . . an edited compilation of speech generated by other persons" for the purpose of expressing a particular message. See Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston, Inc., 515 U.S. 557, 570 (1995). As used in this context, the term "compilation" means any effort to present the expression of others in some sort of organized package. See ibid.
An example such as the famous Oxford Book of English Poetry illustrates why a compilation may constitute expression on the part of the compiler. The editors' selection of the poems included in this volume expresses their view about the poets and poems that most deserve the attention of their anticipated readers. Forcing the editors to exclude or include a poem could alter the expression that the editors wish to convey.
Not all compilations, however, have this expressive characteristic. Suppose that the head of a neighborhood group prepares a directory consisting of contact information submitted by all the residents who want to be listed. This directory would not include any meaningful expression on the part of the compiler.
Because not all compilers express a message of their own, not all compilations are protected by the First Amendment. Instead, the First Amendment protects only those compilations that are "inherently expressive" in their own right, meaning that they select and present speech created by other persons in order "to spread [the compiler's] own message." FAIR, 547 U.S., at 66; Pacific Gas & Elec. Co. v. Public Util. Comm'n of Cal., 475 U.S. 1, 10 (1986) (PG&E) (plurality opinion). If a compilation is inherently expressive, then the compiler may have the right to refuse to accommodate a particular speaker or message. See Hurley, 515 U.S., at 573. But if a compilation is not inherently expressive, then the government can require the compiler to host a message or speaker because the accommodation does not amount to compelled speech. Id., at 578-581.
To show that a hosting requirement would compel speech and thereby trigger First Amendment scrutiny, a claimant must generally show three things.
1
First, a claimant must establish that its practice is to exercise "editorial discretion in the selection and presentation" of the content it hosts. Arkansas Ed. Television Comm'n v. Forbes, 523 U.S. 666, 674 (1998); Hurley, 515 U.S., at 574; ante, at 14. NetChoice describes this process as content "curation." But whatever you call it, not all compilers do this, at least in a way that is inherently expressive. Some may serve as "passive receptacle[s]" of third-party speech or as "dumb pipes" that merely emit what they are fed. Such entities communicate no message of their own, and accordingly, their conduct does not merit First Amendment protection. Miami Herald Publishing Co. v. Tornillo, 418 U.S. 241, 258 (1974).
American Broadcasting Cos. v. Aereo, Inc., 573 U.S. 431, 458 (2014) (Scalia, J., dissenting).
The majority states that it is irrelevant whether "a compiler includes most items and excludes just a few." Ante, at 18. That may be true if the compiler carefully reviews, edits, and selects a large proportion of the items it receives. But if an entity, like some "sort of community billboard, regularly carr[ies] the messages of third parties" instead of selecting only those that contribute to a common theme, then this information becomes highly relevant. PG&E, 475 U.S., at 23 (Marshall, J., concurring in judgment). Entities that have assumed the role of common carriers fall into this category, for example. And the States defend portions of their laws on the ground that at least some social-media platforms have taken on that role. The majority brushes aside that argument without adequate consideration.
Determining whether an entity should be viewed as a "curator" or a "dumb pipe" may not always be easy because different aspects of an entity's operations may take different approaches with respect to hosting third-party speech. The typical newspaper regulates the content and presentation of articles authored by its employees or others, PG&E, 475 U.S., at 8, but that same paper might also run nearly all the classified advertisements it receives, regardless of their content and without adding any expression of its own. Compare Tornillo, 418 U.S. 241, with Pittsburgh Press Co. v. Pittsburgh Comm'n on Human Relations, 413 U.S. 376 (1973). These differences may be significant for First Amendment purposes.
The same may be true for a parade organizer. For example, the practice of a parade organizer may be to select the groups that are admitted, but not the individuals who are allowed to march as members of admitted groups. Hurley, 515 U.S., at 572-574. In such a case, each of these practices would have to be analyzed separately.
2
Second, the host must use the compilation of speech to express "some sort of collective point"-even if only at a fairly abstract level. Id., at 568. Thus, a parade organizer who claims a First Amendment right to exclude certain groups or individuals would need to show at least that the message conveyed by the groups or individuals who are allowed to march comports with the parade's theme. Id., at 560, 574. A parade comprising "unrelated segments" that lumber along together willy-nilly would likely not express anything at all. Id., at 576. And although "a narrow, succinctly articulable message is not a condition of constitutional protection," compilations that organize the speech of others in a non-expressive way (e.g., chronologically) fall "beyond the realm of expressi[on]." Id., at 569; contra, ante, at 17-18.
Our decision in PruneYard illustrates this point. In that case, the Court held that a mall could be required to host third-party speech (i.e., to admit individuals who wanted to distribute handbills or solicit signatures on petitions) because the mall's admission policy did not express any message, and because the mall was "open to the public at large." PruneYard Shopping Center v. Robins, 447 U.S. 74, 83, 87-88 (1980); 303 Creative LLC v. Elenis, 600 U.S. 570, 590 (2023). In such circumstances, we held that the First Amendment is not implicated merely because a host objects to a particular message or viewpoint. See PG&E, 475 U.S., at 12.
3
Finally, a compiler must show that its "own message [is] affected by the speech it [is] forced to accommodate." FAIR, 547 U.S., at 63. In core examples of expressive compilations, such as a book containing selected articles, chapters, stories, or poems, this requirement is easily satisfied. But in other situations, it may be hard to identify any message that would be affected by the inclusion of particular third-party speech.
Two precedents that the majority tries to downplay, if not forget, are illustrative. The first is PruneYard, which I have already discussed. The PruneYard Court rejected the mall's First Amendment claim because "[t]he views expressed by members of the public in passing out pamphlets or seeking signatures for a petition [were] not likely [to] be identified with those of the owner." 447 U.S., at 87. And if those who perused the handbills or petitions were not likely to make that connection, any message that the mall owner intended to convey would not be affected.
The decision in FAIR rested on similar reasoning. In that case, the Court did not dispute the proposition that the law schools' refusal to host military recruiters expressed the message that the military should admit and retain gays and lesbians. But the Court found no First Amendment violation because, as in PruneYard, it was unlikely that the views of the military recruiters "would be identified with" those of the schools themselves, and consequently, hosting the military recruiters did not "sufficiently interfere with any message of the school." 547 U.S., at 64-65; contra, ante, at 25 ("[T]his Court has never hinged a compiler's First Amendment protection on the risk of misattribution.").
To be sure, in Turner Broadcasting System, Inc. v. FCC, 512 U.S. 622, 655 (1994), we held that the First Amendment applied even though there was "little risk" of misattribution in that case. But that is only because the claimants in that case had already shown that the Cable Act affected the quantity or reach of the messages that they communicated through "original programming" or television programs produced by others. Id., at 636 (internal quotation marks omitted). In cases not involving core examples of expressive compilations, such as in PruneYard and FAIR, a compiler's First Amendment protection has very much turned on the risk of misattribution.
B
A party that challenges government interference with its curation of content cannot win without making the three-part showing just outlined, but such a showing does not guarantee victory. To prevail, the party must go on and show that the challenged regulation of its curation practices violates the applicable level of First Amendment scrutiny.
Our decision in Turner makes that clear. Although the television cable operators in that case made the showing needed to trigger First Amendment scrutiny, they did not ultimately prevail on their facial challenge to the Cable Act. After a remand and more than 18 months of additional factual development, the Court held that the law was adequately tailored to serve legitimate and important government interests, including "promoting the widespread dissemination of information from a multiplicity of sources." Turner Broadcasting System, Inc. v. FCC, 520 U.S. 180, 189 (1997). Here, the States assert a similar interest in fostering a free and open marketplace of ideas.
Contrary to the majority's suggestion, ante, at 27, this is not the only interest that Texas asserted. Texas has also invoked its interest in preventing platforms from discriminating against speakers who reside in Texas or engage in certain forms of off-platform speech. Brief for Respondent in No. 22-555, at 15. The majority opinion does not mention these features, much less the interests that Texas claims they serve. Texas also asserts an interest in preventing common carriers from engaging in "invidious discrimination in the distribution of publicly available goods, services, and other advantages." Id., at 18. These are "compelling state interests of the highest order" too. Roberts v. United States Jaycees, 468 U.S. 609, 624 (1984).
C
With these standards in mind, I proceed to the question whether the content-moderation provisions are facially valid. For the following three reasons, NetChoice failed to meet its burden.
1
First, NetChoice did not establish which entities the statutes cover. This failure is critical because it is "impossible to determine whether a statute reaches too far without first knowing what the statute covers." Williams, 553 U.S., at 293. When it sued Florida, NetChoice was reluctant to disclose which of its members were covered by S. B. 7072. Instead, it filed declarations revealing only that the law reached "Etsy, Facebook, and YouTube." Tr. of Oral Arg. in No. 22-277, at 32. In this Court, NetChoice was a bit more forthcoming, representing that S. B. 7072 also covers Instagram, X, Pinterest, Reddit, Gmail, Uber, and other e-commerce websites. Id., at 69, 76; Brief for Respondents in No. 22-277, at 7, 38, 49. But NetChoice has still not provided a complete list.
This concession suggests that S. B. 7072 may "cover websites that engage in primarily non-expressive conduct." Tr. of Oral Arg. in No. 22-277, at 34.
NetChoice was similarly reluctant to identify its affected members in the Texas case. At first, NetChoice "represented . . . that only Facebook, YouTube, and [X] are affected by the Texas law." Brief for Appellant in No. 21-51178 (CA5), at 1, n. 1. But in its brief in this Court, NetChoice told us that H. B. 20 also regulates "some of the Internet's most popular websites, including Facebook, Instagram, Pinterest, TikTok, Vimeo, X (formerly known as Twitter), and YouTube." Brief for Petitioners in No. 22-555, p. 1. And websites such as Discord, Reddit, Wikipedia, and Yelp have filed amicus briefs claiming that they may be covered by both the Texas and Florida laws.
Brief for Discord Inc. as Amicus Curiae 2, 21-27. "Discord is a real time messaging service with over 150 million active monthly users who communicate within a huge variety of interest-based communities, or 'servers.'" Id., at 1.
Brief for Reddit, Inc., as Amicus Curiae 2. Reddit is an online forum that allows its "users to establish and enforce their own rules governing what topics are acceptable and how those topics may be discussed . . . . The display of content on Reddit is thus primarily driven by humans-not by centralized algorithms." Ibid.
Brief for Wikimedia Foundation as Amicus Curiae 2.
Brief for Yelp Inc. as Amicus Curiae 3-4.
It is a mystery how NetChoice could expect to prevail on a facial challenge without candidly disclosing the platforms that it thinks the challenged laws reach or the nature of the content moderation they practice. Without such information, we have no way of knowing whether the laws at issue here "cover websites that engage in primarily non-expressive conduct." Tr. of Oral Arg. in No. 22-277, at 34; see also id., at 126. For example, among other things, NetChoice has not stated whether the challenged laws reach websites like WhatsApp and Gmail, which carry messages instead of curating them to create an independent speech product. Both laws also appear to cover Reddit and BeReal, and websites like Parler, which claim to engage in little or no content moderation at all. And Florida's law, which is even broader than Texas's, plainly applies to e-commerce platforms like Etsy that make clear in their terms of service that they are "not a curated marketplace."
About WhatsApp, WhatsApp, https://whatsapp.com/about (last accessed Apr. 23, 2024).
Secure, Smart, and Easy To Use Email, Gmail, https://google.com/gmail/about (last accessed Apr. 23, 2024).
Reddit Content Policy, Reddit, https://www.redditinc.com/policies/content-policy (last accessed Apr. 23, 2024) (describing Reddit as a platform that is run and moderated by its users).
BeReal, which appears to have enough monthly users to be covered by the Texas law, allows users to share a photo with their friends once during a randomly selected 2-minute window each day. Time To BeReal, https://help.bereal.com/hc/en-us/articles/7350386715165--Time-to-BeReal (last accessed Apr. 23, 2024). Twenty-four hours later, those photos disappear. Because BeReal posts thus appear and disappear "randomly," even the Eleventh Circuit would agree that BeReal likely is not an expressive compilation. 34 F. 4th, at 1214.
Community Guidelines, Parler, https://www.parler.com/community-guidelines (May 31, 2024) ("We honor the ability of all users to freely express themselves without interference from oppressive censorship or manipulation"). Parler probably does not have a sufficient number of monthly users to be covered by these statutes. But it is possible that other covered websites use a similar business model.
Our House Rules, Etsy, https://etsy.com/legal/prohibited (last accessed Apr. 23, 2024).
In First Amendment terms, this means that these laws-in at least some of their applications-appear to regulate the kind of "passive receptacle[s]" of third-party speech that receive no First Amendment protection. Tornillo, 418 U.S., at 258. Given such uncertainty, it is impossible for us to determine whether these laws have a "plainly legitimate sweep." Williams, 553 U.S., at 292; Washington State Grange, 552 U.S., at 449.
2
Second, NetChoice has not established what kinds of content appear on all the regulated platforms, and we cannot determine whether these platforms create an "inherently expressive" compilation of third-party speech until we know what is being compiled.
We know that social-media platforms generally allow their users to create accounts; send direct messages through private inboxes; post written messages, photos, and videos; and comment on, repost, or otherwise interact with other users' posts. And NetChoice acknowledges in fairly general terms that its members engage in most-though not all-of these functions. But such generalities are insufficient.
For one thing, the ways in which users post, send direct messages, or interact with content may differ in meaningful ways from platform to platform. And NetChoice's failure to account for these differences may be decisive. To see how, consider X and Yelp. Both platforms allow users to post comments and photos, but they differ in other respects. X permits users to post (or "Tweet") on a broad range of topics because its "purpose is to serve the public conversation," and as a result, many elected officials use X to communicate with constituents. Yelp, by contrast, allows users to post comments and pictures only for the purpose of advertising local businesses or providing "firsthand accounts" that reflect their "consumer experience" with businesses. It does not permit "rants about political ideologies, a business's employment practices, extraordinary circumstances, or other matters that don't address the core of the consumer experience."
Yelp and X are both covered by S. B. 7072 and H. B. 20. See Brief for Yelp Inc. as Amicus Curiae 4, n. 4.
The X Rules, X, https://help.x.com/en/rules-and-policies/x-rules (last accessed Apr. 23, 2024).
Content Guidelines, Yelp, https://www.yelp.com/guidelines (last accessed Apr. 23, 2024).
Ibid.
As this example shows, X's content is more political than Yelp's, and Yelp's content is more commercial than X's. That difference may be significant for First Amendment purposes. See Pittsburgh Press, 413 U.S. 376. But NetChoice has not developed the record on that front. Nor has it shown what kinds of content appear across the diverse array of regulated platforms.
Social-media platforms are diverse, and each may be unique in potentially significant ways. On the present record, we are ill-equipped to account for the many platform-specific features that allow users to do things like sell or purchase goods, live-stream events, request a ride, arrange a date, create a discussion forum, wire money to friends, play a video game, hire an employee, log a run, or agree to watch a dog. The challenged laws may apply differently to these different functions, which may present different First Amendment issues. A court cannot invalidate the challenged laws if it has to speculate about their applications.
E.g., Facebook Marketplace, Etsy.
E.g., X Live, Twitch.
E.g., Uber, Lyft.
E.g., Facebook Dating, Tinder.
E.g., Reddit, Quora.
E.g., Meta Pay, Venmo, PayPal.
E.g., Metaverse, Discord.
E.g., Indeed, LinkedIn.
E.g., Strava.
E.g., Rover.
3
Third, NetChoice has not established how websites moderate content. NetChoice alleges that "[c]overed websites" generally use algorithms to organize and censor content appearing in "search results, comments, or in feeds." Brief for Petitioners in No. 22-555, at 4, 6. But at this stage and on this record, we have no way of confirming whether all of the regulated platforms use algorithms to organize all of their content, much less whether these algorithms are expressive. See Hurley, 515 U.S., at 568. Facebook and Reddit, for instance, both allow their users to post about a wide range of topics. But while Facebook uses algorithms to arrange and moderate its users' posts, Reddit asserts that its content is moderated by Reddit users, "not by centralized algorithms." Brief for Reddit, Inc., as Amicus Curiae 2. If Reddit and other platforms entirely outsource curation to others, they can hardly claim that their compilations express their own views.
Community Standards, Facebook, https://transparency.meta.com/policies/community-standards ("[Facebook] wants people to be able to talk openly about the issues that matter to them, whether through written comments, photos, music, or other artistic mediums"); Brief for Reddit, Inc., as Amicus Curiae 12 ("[T]he Reddit platform as a whole accommodates a wide range of communities and modes of discourse").
Perhaps recognizing this, NetChoice argues in passing that it cannot tell us how its members moderate content because doing so would embolden "malicious actors" and divulge "proprietary and closely held" information. E.g., Brief for Petitioners in No. 22-555, at 11. But these harms are far from inevitable. Various platforms already make similar disclosures-both voluntarily and to comply with the European Union's Digital Services Act-yet the sky has not fallen. And on remand, NetChoice will have the opportunity to contest whether particular disclosures are necessary and whether any relevant materials should be filed under seal.
Comm'n Reg. 2022/2065, Art. 17, 2022 O. J. (L. 277) 51-52. NetChoice does not dispute the States' assertion that the regulated platforms are required to comply with this law. Compare Brief for Petitioners in No. 22-277, p. 49, with Reply Brief in No. 22-277, p. 24; Tr. of Oral Arg. in No. 22-555, pp. 20-21. If, on remand, the States show that the platforms have been able to comply with this law in Europe without having to forgo "exercising editorial discretion at all," Brief for Respondents in No. 22-277, p. 40, then that might help them prove that their disclosure laws are not "unduly burdensome" under Zauderer v. Office of Disciplinary Counsel of Supreme Court of Ohio, 471 U.S. 626 (1985).
Various NetChoice members already disclose in broad strokes how they use algorithms to curate content. Many platforms claim to use algorithms to identify and remove violent, obscene, sexually explicit, and false posts that violate their community guidelines. Brief for Developers Alliance et al. as Amici Curiae 11. Some platforms-like X, for instance-say they use algorithms, not for the purpose of removing all nonconforming speech, but to "promot[e] counterspeech" that "presents facts to correct misstatements" or "denounces hateful or dangerous speech." Still others, like Parler, Reddit, and Signal Messenger, say they engage in little or no content moderation.
Our Approach to Policy Development and Enforcement Philosophy, X, http://www.help.x.com/en/rules-and-policies/enforcement-philosophy.
Community Guidelines, Parler, https://www.parler.com/community-guidelines.
Reddit Content Policy, Reddit, https://www.redditinc.com/policies/content-policy.
Signal Terms & Privacy Policy, Signal Messenger (May 25, 2018), https://www.signal.org/legal.
Some platforms have also disclosed that they use algorithms to help their users find relevant content. The e-commerce platform Etsy, for instance, uses an algorithm that matches a user's search terms to the "attributes" that a seller ascribes to its wares. Etsy's algorithm also accounts for things like the date of the seller's listing, the proximity of the seller and buyer, and the quality of the seller's customer-service ratings. Ibid.
How Etsy Search Works, Etsy Help Center, https://help.etsy.com/hc/en-us/articles/115015745428-How-Etsy-Search-Works?segment=selling (visited Apr. 9, 2024).
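As a rough illustration of the kind of scoring that disclosure describes, one might imagine something like the following sketch; the weights, field names, and formula are invented for exposition and are not Etsy's actual algorithm:

    # Hypothetical search-ranking sketch (weights and fields invented;
    # not Etsy's actual formula). Listings are scored on attribute
    # overlap with the query, recency, buyer proximity, and ratings.
    from datetime import date

    def score(listing: dict, query: set, today: date) -> float:
        attribute_match = len(query & set(listing["attributes"]))  # term overlap
        recency = 1.0 / (1 + (today - listing["listed_on"]).days)  # newer is better
        proximity = 1.0 / (1 + listing["miles_to_buyer"])          # closer is better
        return 3.0 * attribute_match + recency + proximity + listing["rating"]

    listings = [
        {"attributes": ["ceramic", "mug"], "listed_on": date(2024, 3, 1),
         "miles_to_buyer": 40, "rating": 4.8},
        {"attributes": ["wool", "scarf"], "listed_on": date(2024, 4, 1),
         "miles_to_buyer": 5, "rating": 4.9},
    ]
    ranked = sorted(listings,
                    key=lambda l: score(l, {"ceramic", "mug"}, date(2024, 4, 9)),
                    reverse=True)
    print([l["attributes"] for l in ranked])  # the mug listing ranks first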
YouTube says it answers search queries based on "relevance, engagement and quality"-taking into account how well a search query matches a video title, the kinds of videos a particular user viewed in the past, and each creator's "expertise, authoritativeness, and trustworthiness on a given topic."
YouTube Search, https://www.youtube.com/howyoutubeworks/product-features/search (last accessed Apr. 23, 2024). Unlike many other platforms, YouTube does not accept payment for better placement within organic search results.
These disclosures suggest that platforms can say something about their content-moderation practices without enabling malicious actors or disclosing proprietary information. They also suggest that not all platforms curate all third-party content in an inherently expressive way. Without more information about how regulated platforms moderate content, it is not possible to determine whether these laws lack a "plainly legitimate sweep." Washington State Grange, 552 U.S., at 449.
For all these reasons, NetChoice failed to establish that the content-moderation provisions violate the First Amendment on their face.
D
Although the only question the Court must decide today is whether NetChoice showed that the Florida and Texas laws are facially unconstitutional, much of the majority opinion addresses a different question: whether the Texas law's content-moderation provisions are constitutional as applied to two features of two platforms-Facebook's News Feed and YouTube's homepage. The opinion justifies this discussion on the ground that the Fifth Circuit cannot apply the facial constitutionality test without resolving that question, see, e.g., ante, at 13, 30, but that is not necessarily true. Especially in light of the wide reach of the Texas law, NetChoice may still fall far short of establishing facial unconstitutionality-even if it is assumed for the sake of argument that the Texas law is unconstitutional as applied to Facebook's News Feed and YouTube's homepage.
This problem is even more pronounced for the Florida law, which covers more platforms and conduct than the Texas law.
For this reason, the majority's "guidance" on this issue may well be superfluous. Yet superfluity is not its most egregious flaw. The majority's discussion also rests on wholly conclusory assumptions that lack record support.
For example, the majority paints an attractive, though simplistic, picture of what Facebook's News Feed and YouTube's homepage do behind the scenes. Taking NetChoice at its word, the majority says that the platforms' use of algorithms to enforce their community standards is per se expressive. But the platforms have refused to disclose how these algorithms were created and how they actually work. And the majority fails to give any serious consideration to key arguments pressed by the States. Most notable is the majority's conspicuous failure to address the States' contention that platforms like YouTube and Facebook, which constitute the 21st-century equivalent of the old "public square," should be viewed as common carriers. See Biden v. Knight First Amendment Institute at Columbia University, 593 U.S. ___, ___ (2021) (Thomas, J., concurring) (slip op., at 6). Whether or not the Court ultimately accepts that argument, it deserves serious treatment.
Instead of seriously engaging with this and other arguments, the majority rests on NetChoice's dubious assertion that there is no constitutionally significant difference between what newspaper editors did more than a half-century ago at the time of Tornillo and what Facebook and YouTube do today.
Maybe that is right, but maybe it is not. Before mechanically accepting this analogy, perhaps we should take a closer look.
Let's start with size. Facebook and YouTube each currently produce, on a daily basis, more than four petabytes (4,000,000,000,000,000 bytes) of data. By my calculation, that is roughly 1.3 billion times as many bytes as there are in an issue of the New York Times.
Breaking Down the Numbers: How Much Data Does the World Create Daily in 2024? Edge Delta (Mar. 11, 2024), https://www.edgedelta.com/company/blog/how-much-data-is-created-per-day.
The average issue of the New York Times, excluding ads, contains about 150,000 words. A typical word consists of 10 to 20 bytes. Taking the higher figure, the average issue of the New York Times contains around 3 million bytes.
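Spelled out, the comparison uses only the figures just stated: 150,000 words × 20 bytes per word ≈ 3 × 10^6 bytes per issue, and (4 × 10^15 bytes per day) ÷ (3 × 10^6 bytes per issue) ≈ 1.3 × 10^9, that is, roughly 1.3 billion issues' worth of bytes each day.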
No human being could possibly review even a tiny fraction of this gigantic outpouring of speech, and it is therefore hard to see how any shared message could be discerned. And even if someone could view all this data and find such a message, how likely is it that the addition of a small amount of discordant speech would change the overall message?
Now consider how newspapers and social-media platforms edit content. Newspaper editors are real human beings, and when the Court decided Tornillo (the case that the majority finds most instructive), editors assigned articles to particular reporters, and copyeditors went over typescript with a blue pencil. The platforms, by contrast, play no role in selecting the billions of texts and videos that users try to convey to each other. And the vast bulk of the "curation" and "content moderation" carried out by platforms is not done by human beings. Instead, algorithms remove a small fraction of nonconforming posts post hoc and prioritize content based on factors that the platforms have not revealed and may not even know. After all, many of the biggest platforms are beginning to use AI algorithms to help them moderate content. And when AI algorithms make a decision, "even the researchers and programmers creating them don't really understand why the models they have built make the decisions they make." Are such decisions as expressive as the decisions made by humans? Should we at least think about this?
T. Xu, AI Makes Decisions We Don't Understand-That's a Problem, Built In (Jul. 19, 2021), https://builtin.com/artificial-intelligence/ai-right-explanation.
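To illustrate the mechanics being described, here is a minimal, purely hypothetical sketch of post hoc algorithmic removal. The scoring function is an invented stand-in; as the quoted passage notes, a real platform's learned model may be opaque even to its own developers.

```python
# A minimal, hypothetical sketch of post hoc automated removal. The scoring
# function is an invented stand-in; a real platform's learned model may be
# opaque even to its own developers.
def violation_score(text):
    """Return a 0-to-1 score using invented placeholder terms."""
    banned_terms = {"bannedterm1", "bannedterm2"}
    hits = sum(term in text.lower() for term in banned_terms)
    return hits / len(banned_terms)

def moderate(posts, threshold=0.5):
    """Keep posts scoring below the threshold; remove the rest post hoc."""
    return [p for p in posts if violation_score(p) < threshold]

# Example: only the nonconforming post is removed.
kept = moderate(["a normal post", "a post with bannedterm1 and bannedterm2"])
assert kept == ["a normal post"]
```

Nothing in the record says any platform works exactly this way; the sketch shows only that removal can be fully mechanical, which bears on whether each removal decision reflects an expressive judgment.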
Other questions abound. Maybe we should think about the enormous power exercised by platforms like Facebook and YouTube as a result of "network effects." Cf. Ohio v. American Express Co., 585 U.S. 529 (2018). And maybe we should think about the unique ways in which social-media platforms influence public thought. To be sure, I do not suggest that we should decide at this time whether the Florida and Texas laws are constitutional as applied to Facebook's News Feed or YouTube's homepage. My argument is just the opposite. Such questions should be resolved in the context of an as-applied challenge. But no as-applied question is before us, and we do not have all the facts that we need to tackle the extraneous matters reached by the majority.
Instead, when confronted with the application of a constitutional requirement to new technology, we should proceed with caution. While the meaning of the Constitution remains constant, the application of enduring principles to new technology requires an understanding of that technology and its effects. Premature resolution of such questions creates the risk of decisions that will quickly turn into embarrassments.
IV
Just as NetChoice failed to make the showing necessary to demonstrate that the States' content-moderation provisions are facially unconstitutional, NetChoice's facial attacks on the individualized-explanation provisions also fell short. Those provisions require platforms to explain to affected users the basis of each content-censorship decision. Because these regulations provide for the disclosure of "purely factual and uncontroversial information," they must be reviewed under Zauderer's framework, which requires only that such laws be "reasonably related to the State's interest in preventing deception of consumers" and not "unduly burde[n]" speech. 471 U.S., at 651.
Both lower courts reviewed these provisions under the Zauderer test. And in the Florida case in particular, NetChoice did not contest, and accordingly forfeited, the question whether Zauderer applies here. See Brief for Appellants in No. 21-12355 (CA11), at 21; Brief for Appellees in No. 21-12355 (CA11), p. 44.
For Zauderer purposes, a law is "unduly burdensome" if it threatens to "chil[l] protected commercial speech." Ibid. Here, NetChoice claims that these disclosures have that effect and lead platforms to "conclude that the safe course is to . . . not exercis[e] editorial discretion at all" rather than explain why they remove "millions of posts per day." Brief for Respondents in No. 22-277, at 39-40 (internal quotation marks omitted).
Our unanimous agreement that NetChoice failed to show that a sufficient number of its members engage in constitutionally protected expression prevents us from accepting NetChoice's argument regarding these provisions. In the lower courts, NetChoice did not even try to show how these disclosure provisions chill each platform's speech. Instead, NetChoice merely identified one subset of one platform's content that would be affected by these laws: billions of nonconforming comments that YouTube removes each year. 49 F. 4th, at 487; see also Brief for Appellees in No. 21-12355 (CA11), p. 13. But if YouTube uses automated processes to flag and remove these comments, it is not clear why having to disclose the bases of those processes would chill YouTube's speech. And even if having to explain each removal decision would unduly burden YouTube's First Amendment rights, the same does not necessarily follow with regard to all of NetChoice's members.
NetChoice's failure to make this broader showing is especially problematic since NetChoice does not dispute the States' assertion that many platforms already provide a notice-and-appeal process for their removal decisions. In fact, some have even advocated for such disclosure requirements. Before its change in ownership, the then-Chief Executive Officer of the platform now known as X went so far as to say that "all companies" should be required to explain censorship decisions and "provide a straightforward process to appeal decisions made by humans or algorithms." Moreover, as mentioned, many platforms are already providing similar disclosures pursuant to the European Union's Digital Services Act. Yet complying with that law does not appear to have unduly burdened each platform's speech in those countries. On remand, the courts might consider whether compliance with EU law chilled the platforms' speech.
Does Section 230's Sweeping Immunity Enable Big Tech Bad Behavior? Hearing before the Senate Committee on Commerce, Science, and Transportation, 116th Cong., 2d Sess., 2 (2020) (statement of Jack Dorsey, CEO, Twitter, Inc.).
* * *
The only binding holding in these decisions is that NetChoice has yet to prove that the Florida and Texas laws it challenged are facially unconstitutional. Because the majority opinion ventures far beyond the question we must decide, I concur only in the judgment.
[*] Together with No. 22-555, NetChoice, LLC, dba NetChoice, et al. v. Paxton, Attorney General of Texas, on certiorari to the United States Court of Appeals for the Fifth Circuit.
[*]JUSTICE JACKSON joins Parts I, II, and III-A of this opinion.