Summary
holding that the First Amendment protects a social media platform's right to moderate user-generated content as it sees fit
Opinion
No. 21-12355
05-23-2022
Paul D. Clement, Winn Allen, Kasdin Miller Mitchell, James Xi, Attorney, Kirkland & Ellis, LLP, Washington, DC, Evelyn Blacklock, Kirkland & Ellis, LLP, New York, NY, Ilana Hope Eisenstein, Ben Fabens-Lassen, Jonathan Allen Green, Danielle T. Morrison, DLA Piper LLP (US), Philadelphia, PA, Steffen Nathanael Johnson, Wilson Sonsini Goodrich & Rosati, PC, Washington, DC, Peter Karanjia, DLA Piper LLP (US), Washington, DC, Christopher G. Oprison, DLA Piper, LLP (US), Miami, FL, Joseph Trumon Phillips, DLA Piper LLP (US), Tampa, FL, Lauren Gallo White, Wilson Sonsini Goodrich & Rosati, PC, San Francisco, CA, Brian M. Willen, Wilson Sonsini Goodrich & Rosati, New York, NY, Meng Jia Yang, Wilson Sonsini Goodrich & Rosati, PC, Palo Alto, CA, for Plaintiff - Appellee NetChoice LLC.
Paul D. Clement, Winn Allen, Kasdin Miller Mitchell, James Xi, Attorney, Kirkland & Ellis, LLP, Washington, DC, Evelyn Blacklock, Kirkland & Ellis, LLP, New York, NY, Ilana Hope Eisenstein, Ben Fabens-Lassen, Jonathan Allen Green, Danielle T. Morrison, DLA Piper LLP (US), Philadelphia, PA, Steffen Nathanael Johnson, Wilson Sonsini Goodrich & Rosati, PC, Washington, DC, Peter Karanjia, DLA Piper LLP (US), Washington, DC, Christopher G. Oprison, DLA Piper, LLP (US), Miami, FL, Joseph Trumon Phillips, DLA Piper LLP (US), Tampa, FL, Lauren Gallo White, Wilson Sonsini Goodrich & Rosati, PC, San Francisco, CA, Brian M. Willen, Wilson Sonsini Goodrich & Rosati, New York, NY, Meng Jia Yang, Wilson Sonsini Goodrich & Rosati, PC, Palo Alto, CA, Glenn Thomas Burhans, Jr., Christopher Roy Clark, Stearns Weaver Miller Weissler Alhadeff & Sitterson, PA, Tallahassee, FL, Douglas Lamar Kilby, Ausley & McMullen, PA, Tallahassee, FL, Bridget Kellogg Smitha, Greenberg Traurig, PA, Tallahassee, FL, for Plaintiff - Appellee Computer & Communications Industry Association.
Florida Attorney General Service, Daniel William Bell, Henry Charles Whitaker, Office of the Attorney General, Tallahassee, FL, for Defendant - Appellant Attorney General, State of Florida.
Florida Attorney General Service, Daniel William Bell, Henry Charles Whitaker, Office of the Attorney General, Tallahassee, FL, Evan Matthew Ezray, Attorney General's Office, West Palm Beach, FL, for Defendants - Appellants Joni Alexis Poitier, Jason Todd Allen, John Martin Hayes and Kymberlee Curry Smith.
Charles J. Cooper, Brian W. Barnes, Joseph Masterman, David H. Thompson, John Tienken, Cooper & Kirk, PLLC, Washington, DC, Daniel William Bell, Henry Charles Whitaker, Office of the Attorney General, Tallahassee, FL, Raymond Treadwell, James Uthmeier, Executive Office of the Governor, Tallahassee, FL, for Defendant - Appellant Deputy Secretary of Business Operations of the Florida Department of Management Services.
Endel Kolde, Senior Attorney, Alan Gura, Institute for Free Speech, Washington, DC, for Amicus Curiae Institute for Free Speech.
Jordan E. Pratt, First Liberty Institute, Washington, DC, for Amici Curiae The Babylon Bee, LLC and Not the Bee, LLC.
William Francis Cole, Office of the Attorney General of Texas, Austin, TX, for Amici Curiae State of Texas, State of Alabama, State of Alaska, State of Arizona, State of Arkansas, State of Kentucky, State of Mississippi, State of Missouri, State of Montana and State of South Carolina.
Richard Lawson, Gardner Brewer Hudson, PA, Tampa, FL, Joshua Joseph Campbell, Joshua Joseph Campbell Law Office, Savannah, GA, D. Adam Candeub, Michigan State University, East Lansing, MI, for Amicus Curiae America First Policy Institute.
Corbin Barthold, TechFreedom, Washington, DC, for Amicus Curiae TechFreedom.
Marc J. Zwillinger, Jacob A. Sommer, ZwillGen, PLLC, Washington, DC, for Amici Curiae Chamber of Progress, Connected Commerce Council, CTA, Engine Advocacy, Information Technology and Innovation Foundation, LGBT Tech Institute, National Black Justice Coalition, TechNet, Washington Center for Technology Policy Inclusion and Center for Democracy & Technology.
Jacob A. Sommer, ZwillGen, PLLC, Washington, DC, for Amicus Curiae Progressive Policy Institute.
James Burges Lake, Mark R. Caramanica, Thomas & LoCicero, PL, Tampa, FL, for Amicus Curiae Center for Democracy & Technology.
Ilya Shapiro, Cato Institute, Washington, DC, for Amicus Curiae Cato Institute.
Deanna K. Shullman, Shullman Fugate, PLLC, Tampa, FL, for Amici Curiae The Reporters Committee for Freedom of the Press, American Booksellers Association, American Civil Liberties Union, American Civil Liberties Union of Florida, The Authors Guild, Inc., The Media Coalition Foundation, Inc., The Media Law Resource Center and Pen American Center, Inc.
David Greene, Electronic Frontier Foundation, San Francisco, CA, Christopher Benton Hopkins, McDonald Hopkins, LLC, West Palm Beach, FL, for Amicus Curiae Electronic Frontier Foundation.
Christopher Benton Hopkins, McDonald Hopkins, LLC, West Palm Beach, FL, for Amicus Curiae Protect Democracy Project, Inc.
Patrick Carome, Ari Holtzblatt, Wilmer Cutler Pickering Hale & Dorr, LLP, Washington, DC, for Amicus Curiae Internet Association.
Scott B. Wilkens, Knight First Amendment Institute at Columbia University, New York, NY, for Amicus Curiae The Knight First Amendment Institute at Columbia University.
Allonn Emanuel Levy, Hopkins & Carley, San Jose, CA, Robin Gross, Law Office of Robin Gross, San Francisco, CA, for Amicus Curiae IP Justice.
Jackson Roger Sharman, III, Jeff P. Doss, Jonathan R. Little, Lightfoot Franklin & White, LLC, Birmingham, AL, for Amicus Curiae Christopher Cox.
Catherine R. Gellis, Catherine R. Gellis, Esq., Sausalito, CA, for Amicus Curiae Floor64, Inc.
Gautam Hans, Vanderbilt Legal Clinic, Nashville, TN, for Amicus Curiae First Amendment Law Professors.
Before Newsom, Tjoflat, and Ed Carnes, Circuit Judges.
Newsom, Circuit Judge: Not in their wildest dreams could anyone in the Founding generation have imagined Facebook, Twitter, YouTube, or TikTok. But "whatever the challenges of applying the Constitution to ever-advancing technology, the basic principles of freedom of speech and the press, like the First Amendment's command, do not vary when a new and different medium for communication appears." Brown v. Ent. Merchs. Ass'n , 564 U.S. 786, 790, 131 S.Ct. 2729, 180 L.Ed.2d 708 (2011) (quotation marks omitted). One of those "basic principles"—indeed, the most basic of the basic—is that "[t]he Free Speech Clause of the First Amendment constrains governmental actors and protects private actors." Manhattan Cmty. Access Corp. v. Halleck , ––– U.S. ––––, 139 S. Ct. 1921, 1926, 204 L.Ed.2d 405 (2019). Put simply, with minor exceptions, the government can't tell a private person or entity what to say or how to say it.
The question at the core of this appeal is whether the Facebooks and Twitters of the world—indisputably "private actors" with First Amendment rights—are engaged in constitutionally protected expressive activity when they moderate and curate the content that they disseminate on their platforms. The State of Florida insists that they aren't, and it has enacted a first-of-its-kind law to combat what some of its proponents perceive to be a concerted effort by "the ‘big tech’ oligarchs in Silicon Valley" to "silenc[e]" "conservative" speech in favor of a "radical leftist" agenda. To that end, the new law would, among other things, prohibit certain social-media companies from "deplatforming" political candidates under any circumstances, prioritizing or deprioritizing any post or message "by or about" a candidate, and, more broadly, removing anything posted by a "journalistic enterprise" based on its content.
We hold that it is substantially likely that social-media companies—even the biggest ones—are "private actors" whose rights the First Amendment protects, Manhattan Cmty. , 139 S. Ct. at 1926, that their so-called "content-moderation" decisions constitute protected exercises of editorial judgment, and that the provisions of the new Florida law that restrict large platforms’ ability to engage in content moderation unconstitutionally burden that prerogative. We further conclude that it is substantially likely that one of the law's particularly onerous disclosure provisions—which would require covered platforms to provide a "thorough rationale" for each and every content-moderation decision they make—violates the First Amendment. Accordingly, we hold that the companies are entitled to a preliminary injunction prohibiting enforcement of those provisions. Because we think it unlikely that the law's remaining (and far less burdensome) disclosure provisions violate the First Amendment, we hold that the companies are not entitled to preliminary injunctive relief with respect to them.
I
A
We begin with a primer: This is a case about social-media platforms. (If you're one of the millions of Americans who regularly use social media or can't remember a time before social media existed, feel free to skip ahead.)
At their core, social-media platforms collect speech created by third parties—typically in the form of written text, photos, and videos, which we'll collectively call "posts"—and then make that speech available to others, who might be either individuals who have chosen to "follow" the "post"-er or members of the general public. Social-media platforms include both massive websites with billions of users—like Facebook, Twitter, YouTube, and TikTok—and niche sites that cater to smaller audiences based on specific interests or affiliations—like Roblox (a child-oriented gaming network), ProAmericaOnly (a network for conservatives), and Vegan Forum (self-explanatory).
Three important points about social-media platforms: First—and this would be too obvious to mention if it weren't so often lost or obscured in political rhetoric—platforms are private enterprises, not governmental (or even quasi-governmental) entities. No one has an obligation to contribute to or consume the content that the platforms make available. And correlatively, while the Constitution protects citizens from governmental efforts to restrict their access to social media, see Packingham v. North Carolina , ––– U.S. ––––, 137 S. Ct. 1730, 1737, 198 L.Ed.2d 273 (2017), no one has a vested right to force a platform to allow her to contribute to or consume social-media content.
Second, a social-media platform is different from traditional media outlets in that it doesn't create most of the original content on its site; the vast majority of "tweets" on Twitter and videos on YouTube, for instance, are created by individual users, not the companies that own and operate Twitter and YouTube. Even so, platforms do engage in some speech of their own: A platform, for example, might publish terms of service or community standards specifying the type of content that it will (and won't) allow on its site, add addenda or disclaimers to certain posts (say, warning of misinformation or mature content), or publish its own posts.
Third, and relatedly, social-media platforms aren't "dumb pipes": They're not just servers and hard drives storing information or hosting blogs that anyone can access, and they're not internet service providers reflexively transmitting data from point A to point B. Rather, when a user visits Facebook or Twitter, for instance, she sees a curated and edited compilation of content from the people and organizations that she follows. If she follows 1,000 people and 100 organizations on a particular platform, for instance, her "feed"—for better or worse—won't just consist of every single post created by every single one of those people and organizations arranged in reverse-chronological order. Rather, the platform will have exercised editorial judgment in two key ways: First, the platform will have removed posts that violate its terms of service or community standards—for instance, those containing hate speech, pornography, or violent content. See, e.g. , Doc. 26-1 at 3–6; Facebook Community Standards , Meta, https://transparency.fb.com/policies/community-standards (last accessed May 15, 2022). Second, it will have arranged available content by choosing how to prioritize and display posts—effectively selecting which users’ speech the viewer will see, and in what order, during any given visit to the site. See Doc. 26-1 at 3.
Accordingly, a social-media platform serves as an intermediary between users who have chosen to partake of the service the platform provides and thereby participate in the community it has created. In that way, the platform creates a virtual space in which every user—private individuals, politicians, news organizations, corporations, and advocacy groups—can be both speaker and listener. In playing this role, the platforms invest significant time and resources into editing and organizing—the best word, we think, is curating —users’ posts into collections of content that they then disseminate to others. By engaging in this content moderation, the platforms develop particular market niches, foster different sorts of online communities, and promote various values and viewpoints.
B
The State of Florida enacted S.B. 7072—in the words of the Act's sponsor, as quoted in Governor DeSantis's signing statement—to combat the "biased silencing" of "our freedom of speech as conservatives ... by the ‘big tech’ oligarchs in Silicon Valley." News Release: Governor Ron DeSantis Signs Bill to Stop the Censorship of Floridians by Big Tech (May 24, 2021). The bill, the Governor explained, was passed to take "action to ensure that ‘We the People’—real Floridians across the Sunshine State—are guaranteed protection against the Silicon Valley elites" and to check the "Big Tech censors" that "discriminate in favor of the dominant Silicon Valley ideology." Id. By signing the bill, the Governor sought to "fight[ ] against [the] big tech oligarchs that contrive, manipulate, and censor if you voice views that run contrary to their radical leftist narrative." Id.
See https://www.flgov.com/2021/05/24/governor-ron-desantis-signs-bill-to-stop-the-censorship-of-floridians-by-big-tech.
S.B. 7072's enacted findings are more measured. They assert that private social-media platforms are important "in preserving first amendment protections for all Floridians" and, comparing platforms to "public utilities," argue that they should be "treated similarly to common carriers." S.B. 7072 § 1(5), (6). That, the Act says, is because social-media platforms "have unfairly censored, shadow banned, deplatformed, and applied post-prioritization algorithms to Floridians" and because "[t]he state has a substantial interest in protecting its residents from inconsistent and unfair actions" by the platforms. Id. § 1(9), (10).
To these ends, S.B. 7072 contains several new statutory provisions that apply to "social media platforms." The term "social media platform" is defined using size and revenue thresholds that appear to target the "big tech oligarchs" about whose "narrative" and "ideology" the bill's sponsor and Governor DeSantis had complained. Even so, the definition's broad conception of what a "social media platform" does may well sweep in other popular websites, like the crowdsourced reference tool Wikipedia and virtual handmade craft-market Etsy:
[A]ny information service, system, Internet search engine, or access software provider that:
1. Provides or enables computer access by multiple users to a computer server, including an Internet platform or a social media site;
2. Operates as a sole proprietorship, partnership, limited liability company, corporation, association, or other legal entity;
3. Does business in the state; and
4. Satisfies at least one of the following thresholds:
a. Has annual gross revenues in excess of $100 million ...
b. Has at least 100 million monthly individual platform participants globally.
Fla. Stat. § 501.2041(1)(g). As originally enacted, the law's definition of "social media platform" expressly excluded any platform "operated by a company that owns and operates a theme park or entertainment complex." Id. But after the onset of this litigation—and after Disney executives made public comments critical of another recently enacted Florida law—the State repealed S.B. 7072's theme-park-company exemption. See S.B. 6-C (2022).
The relevant provisions of S.B. 7072—which are codified at Fla. Stat. §§ 106.072 and 501.2041 —can be divided into three categories: (1) content-moderation restrictions; (2) disclosure obligations; and (3) a user-data requirement.
While S.B. 7072 also enacted antitrust-related provisions, only §§ 106.072 and 501.2041 are at issue in this appeal.
Content-Moderation Restrictions
• Candidate deplatforming : A social-media platform "may not willfully deplatform a candidate for office." Fla. Stat. § 106.072(2). The term "deplatform" is defined to mean "the action or practice by a social media platform to permanently delete or ban a user or to temporarily delete or ban a user from the social media platform for more than 14 days." Id. § 501.2041(1)(c).
• Posts by or about candidates : "A social media platform may not apply or use post-prioritization or shadow banning algorithms for content and material posted by or about ... a candidate." Id. § 501.2041(2)(h). "Post prioritization" refers to the practice of arranging certain content in a more or less prominent position in a user's feed or search results. Id. § 501.2041(1)(e). "Shadow banning" refers to any action to "limit or eliminate the exposure of a user or content or material posted by a user to other users of [a] ... platform." Id. § 501.2041(1)(f).
For purposes of this appeal, the State does not defend the Act's post-prioritization provisions.
• "Journalistic enterprises" : A social-media platform may not "censor, deplatform, or shadow ban a journalistic enterprise based on the content of its publication or broadcast." Id. § 501.2041(2)(j). The term "journalistic enterprise" is defined broadly to include any entity doing business in Florida that either (1) publishes in excess of 100,000 words online and has at least 50,000 paid subscribers or 100,000 monthly users, (2) publishes 100 hours of audio or video online and has at least 100 million annual viewers, (3) operates a cable channel that provides more than 40 hours of content per week to more than 100,000 cable subscribers, or (4) operates under an FCC broadcast license. Id. § 501.2041(1)(d). The term "censor" is also defined broadly to include not only actions taken to "delete," "edit," or "inhibit the publication of" content, but also any effort to "post an addendum to any content or material." Id. § 501.2041(1)(b). The only exception to this provision's prohibition is for "obscene" content. Id. § 501.2041(2)(j).
• Consistency : A social-media platform must "apply censorship, deplatforming, and shadow banning standards in a consistent manner among its users on the platform." Id. § 501.2041(2)(b). The Act does not define the term "consistent."
• 30-day restriction : A platform may not make changes to its "user rules, terms, and agreements ... more than once every 30 days." Id. § 501.2041(2)(c).
• User opt-out : A platform must "categorize" its post-prioritization and shadow-banning algorithms and allow users to opt out of them; for users who opt out, the platform must display material in "sequential or chronological" order. Id. § 501.2041(2)(f). The platform must offer users the opportunity to opt out annually. Id. § 501.2041(2)(g).
Disclosure Obligations
• Standards : A social-media platform must "publish the standards, including detailed definitions, it uses or has used for determining how to censor, deplatform, and shadow ban." Id. § 501.2041(2)(a).
• Rule changes : A platform must inform its users "about any changes to" its "rules, terms, and agreements before implementing the changes." Id. § 501.2041(2)(c).
• View counts : Upon request, a platform must provide a user with the number of others who viewed that user's content or posts. Id. § 501.2041(2)(e).
• Candidate free advertising : Platforms that "willfully provide[ ] free advertising for a candidate must inform the candidate of such in-kind contribution." Id. § 106.072(4).
• Explanations : Before a social-media platform deplatforms, censors, or shadow-bans any user, it must provide the user with a detailed notice. Id. § 501.2041(2)(d). In particular, the notice must be in writing and be delivered within 7 days, and must include both a "thorough rationale explaining the reason" for the "censor[ship]" and a "precise and thorough explanation of how the social media platform became aware" of the content that triggered its decision. Id. § 501.2041(3). (The notice requirement doesn't apply "if the censored content or material is obscene." Id. § 501.2041(4).)
User-Data Requirement
• Data access : A social-media platform must allow a deplatformed user to "access or retrieve all of the user's information, content, material, and data for at least 60 days" after the user receives notice of deplatforming. Id. § 501.2041(2)(i).
Enforcement of § 106.072—which contains the candidate-deplatforming provision—falls to the Florida Elections Commission, which is empowered to impose fines of up to $250,000 per day for violations involving candidates for statewide office and $25,000 per day for those involving candidates for other offices. Id. § 106.072(3). Section 501.2041—which contains S.B. 7072's remaining provisions—may be enforced either by state governmental actors or through civil suits filed by private parties. Id. § 501.2041(5), (6). Private actions under this section can yield up to $100,000 in statutory damages per claim, actual damages, punitive damages, equitable relief, and, in some instances, attorneys’ fees. Id. § 501.2041(6).
C
The plaintiffs here—NetChoice and the Computer & Communications Industry Association (together, "NetChoice")—are trade associations that represent internet and social-media companies like Facebook, Twitter, Google (which owns YouTube), and TikTok. They sued the Florida officials charged with enforcing S.B. 7072 under 42 U.S.C. § 1983. In particular, they sought to enjoin enforcement of §§ 106.072 and 501.2041 on a number of grounds, including, as relevant here, that the law's provisions (1) violate the social-media companies’ right to free speech under the First Amendment and (2) are preempted by federal law.
The district court granted NetChoice's motion and preliminarily enjoined enforcement of §§ 106.072 and 501.2041 in their entirety. The court held that the provisions that impose liability for platforms’ decisions to remove or deprioritize content are likely preempted by 47 U.S.C. § 230(c)(2), which states that "[n]o provider or user of an interactive computer service shall be held liable on account of ... any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected."
On NetChoice's free-speech challenge, the district court held that the Act's provisions implicated the First Amendment because they restrict platforms’ constitutionally protected exercise of "editorial judgment." The court then applied strict First Amendment scrutiny because it concluded that some of the Act's provisions were content-based and, more broadly, because it found that the entire bill was motivated by the state's viewpoint-based purpose to defend conservatives’ speech from perceived liberal "big tech" bias: "This viewpoint-based motivation, without more, subjects the legislation to strict scrutiny, root and branch." Doc. 113 at 23–26. The court held that the Act's provisions "come nowhere close" to surviving strict scrutiny because, it said, "leveling the playing field" for speech is not a legitimate state interest, the provisions aren't narrowly tailored, and the State hadn't even argued that the provisions could survive such scrutiny. Id. at 27. The court further noted that even if more permissive intermediate scrutiny applied, the provisions wouldn't survive because they don't meet the narrow-tailoring requirement and instead "seem designed not to achieve any governmental interest but to impose the maximum available burden on the social media platforms." Id. at 28. The court concluded that the plaintiffs easily met the remaining requirements for a preliminary injunction.
The State appealed. Before us, the State first argues that the plaintiffs are unlikely to succeed on their preemption challenge because some applications of the Act are consistent with § 230. Second, and more importantly for our purposes, the State contends that S.B. 7072 doesn't even implicate —let alone violate—the First Amendment because the platforms aren't engaged in protected speech. Rather, the State asserts that the Act merely requires platforms to "host" third-parties’ speech, which, it says, they may constitutionally be compelled to do under two Supreme Court decisions— PruneYard Shopping Center v. Robins , 447 U.S. 74, 100 S.Ct. 2035, 64 L.Ed.2d 741 (1980), and Rumsfeld v. Forum for Academic & Institutional Rights, Inc. , 547 U.S. 47, 126 S.Ct. 1297, 164 L.Ed.2d 156 (2006). Alternatively, the State says, the Act doesn't trigger First Amendment scrutiny because it reflects the State's permissible decision to treat social-media platforms like "common carriers."
NetChoice responds that platforms’ content-moderation decisions—i.e. , their decisions to remove or deprioritize posts or deplatform users, and thereby curate the material they disseminate—are "editorial judgments" that are protected by the First Amendment under longstanding Supreme Court precedent, including Miami Herald Publishing Co. v. Tornillo , 418 U.S. 241, 94 S.Ct. 2831, 41 L.Ed.2d 730 (1974), Pacific Gas & Electric Co. v. Public Utilities Commission of California , 475 U.S. 1, 106 S.Ct. 903, 89 L.Ed.2d 1 (1986), Turner Broadcasting System, Inc. v. FCC , 512 U.S. 622, 114 S.Ct. 2445, 129 L.Ed.2d 497 (1994), and Hurley v. Irish-American Gay, Lesbian & Bisexual Group of Boston , 515 U.S. 557, 115 S.Ct. 2338, 132 L.Ed.2d 487 (1995). According to NetChoice, strict scrutiny applies to the entire law "several times over" because it is speaker-, content-, and viewpoint-based. Moreover, and in any event, NetChoice says, the law fails any form of heightened scrutiny because there is no legitimate state interest in equalizing speech and because the law isn't narrowly tailored. NetChoice briefly defends the district court's preemption holding, but focuses on the First Amendment issues because they fully dispose of the case and because, it contends, a First Amendment violation is a quintessential irreparable injury for injunctive-relief purposes.
D
"We review the grant of a preliminary injunction for abuse of discretion, reviewing any underlying legal conclusions de novo and any findings of fact for clear error." Gonzalez v. Governor of Ga. , 978 F.3d 1266, 1270 (11th Cir. 2020). Ordinarily, "[a] district court may grant injunctive relief only if the moving party shows that: (1) it has a substantial likelihood of success on the merits; (2) irreparable injury will be suffered unless the injunction issues; (3) the threatened injury to the movant outweighs whatever damage the proposed injunction may cause the opposing party; and (4) if issued, the injunction would not be adverse to the public interest." Siegel v. LePore , 234 F.3d 1163, 1176 (11th Cir. 2000) (en banc). Likelihood of success on the merits "is generally the most important" factor. Gonzalez , 978 F.3d at 1271 n.12 (quotation marks omitted).
* * *
We will train our attention on the question whether NetChoice has shown a substantial likelihood of success on the merits of its First Amendment challenge to Fla. Stat. §§ 106.072 and 501.2041. Because we conclude that the Act's content-moderation restrictions are substantially likely to violate the First Amendment, and because that conclusion fully disposes of the appeal, we needn't reach the merits of the plaintiffs’ preemption challenge.
The only provisions that NetChoice challenges as preempted are, for reasons we'll explain, also substantially likely to violate the First Amendment. Of course, federal courts should generally "avoid reaching constitutional questions if there are other grounds upon which a case can be decided," but that rule applies only when "a dispositive nonconstitutional ground is available." Otto v. City of Boca Raton , 981 F.3d 854, 871 (11th Cir. 2020) (quotation marks and emphasis omitted). Here, whether or not the preemption ground is "dispositive," but cf. id. , it isn't "nonconstitutional" because federal preemption is rooted in the Supremacy Clause of Article VI, see La. Pub. Serv. Comm'n v. FCC , 476 U.S. 355, 368, 106 S.Ct. 1890, 90 L.Ed.2d 369 (1986).
In assessing whether the Act likely violates the First Amendment, we must initially consider whether it triggers First Amendment scrutiny in the first place—i.e. , whether it regulates "speech" within the meaning of the Amendment at all. See Coral Ridge Ministries Media, Inc. v. Amazon.com, Inc. , 6 F.4th 1247, 1254 (11th Cir. 2021). In other words, we must determine whether social-media platforms engage in First-Amendment-protected activity. If they do, we must then proceed to determine what level of scrutiny applies and whether the Act's provisions survive that scrutiny. See Fort Lauderdale Food Not Bombs v. City of Fort Lauderdale , 11 F.4th 1266, 1291 (11th Cir. 2021) (" FLFNB II ").
For reasons we will explain in the balance of the opinion, we hold as follows: (1) S.B. 7072 triggers First Amendment scrutiny because it restricts social-media platforms’ exercise of editorial judgment and requires them to make certain disclosures; (2) strict scrutiny applies to some of the Act's content-moderation restrictions while intermediate scrutiny applies to others; (3) the Act's disclosure provisions should be assessed under the standard articulated in Zauderer v. Office of Disciplinary Counsel , 471 U.S. 626, 105 S.Ct. 2265, 85 L.Ed.2d 652 (1985) ; (4) it is substantially likely that the Act's content-moderation restrictions will not survive even intermediate scrutiny; (5) it is also substantially likely that the requirement that platforms provide a "thorough rationale" for each content-moderation decision will not survive under Zauderer ; (6) it is not substantially likely that the Act's remaining disclosure provisions are unconstitutional; and (7) the preliminary-injunction factors favor enjoining the provisions of the Act that are substantially likely to be unconstitutional.
II
A
Social-media platforms like Facebook, Twitter, YouTube, and TikTok are private companies with First Amendment rights, see First Nat'l Bank of Bos. v. Bellotti , 435 U.S. 765, 781–84, 98 S.Ct. 1407, 55 L.Ed.2d 707 (1978), and when they (like other entities) "disclos[e]," "publish[ ]," or "disseminat[e]" information, they engage in "speech within the meaning of the First Amendment," Sorrell v. IMS Health Inc. , 564 U.S. 552, 570, 131 S.Ct. 2653, 180 L.Ed.2d 544 (2011) (quotation marks omitted). More particularly, when a platform removes or deprioritizes a user or post, it makes a judgment about whether and to what extent it will publish information to its users—a judgment rooted in the platform's own views about the sorts of content and viewpoints that are valuable and appropriate for dissemination on its site. As the officials who sponsored and signed S.B. 7072 recognized when alleging that "Big Tech" companies harbor a "leftist" bias against "conservative" perspectives, the companies that operate social-media platforms express themselves (for better or worse) through their content-moderation decisions. When a platform selectively removes what it perceives to be incendiary political rhetoric, pornographic content, or public-health misinformation, it conveys a message and thereby engages in "speech" within the meaning of the First Amendment.
Laws that restrict platforms’ ability to speak through content moderation therefore trigger First Amendment scrutiny. Two lines of precedent independently confirm this commonsense conclusion: first, and most obviously, decisions protecting exercises of "editorial judgment"; and second, and separately, those protecting inherently expressive conduct.
1
We'll begin with the editorial-judgment cases. The Supreme Court has repeatedly held that a private entity's choices about whether, to what extent, and in what manner it will disseminate speech—even speech created by others—constitute "editorial judgments" protected by the First Amendment.
Miami Herald Publishing Co. v. Tornillo is the pathmarking case. There, the Court held that a newspaper's decisions about what content to publish and its "treatment of public issues and public officials—whether fair or unfair—constitute the exercise of editorial control and judgment" that the First Amendment was designed to safeguard. 418 U.S. at 258, 94 S.Ct. 2831. Florida had passed a statute requiring any paper that ran a piece critical of a political candidate to give the candidate equal space in its pages to reply. Id. at 243, 94 S.Ct. 2831. Despite the contentions (1) that economic conditions had created "vast accumulations of unreviewable power in the modern media empires" and (2) that those conditions had resulted in "bias and manipulative reportage" and massive barriers to entry, the Court concluded that the state's attempt to compel the paper's editors to "publish that which reason tells them should not be published is unconstitutional." Id. at 250–51, 256, 94 S.Ct. 2831 (quotation marks omitted). Florida's "intrusion into the function of editors," the Court held, was barred by the First Amendment. Id. at 258, 94 S.Ct. 2831.
The Court subsequently extended Miami Herald ’s protection of editorial judgment beyond newspapers. In Pacific Gas & Electric Co. v. Public Utilities Commission of California , the Court invalidated a state agency's order that would have required a utility company to include in its billing envelopes the speech of a third party with which the company disagreed. 475 U.S. at 4, 20, 106 S.Ct. 903 (plurality op.). A plurality of the Court reasoned that the concerns underlying Miami Herald applied to a utility company in the same way that they did to the institutional press. Id. at 11–12, 106 S.Ct. 903. The challenged order required the company "to use its property as a vehicle for spreading a message with which it disagree[d]" and therefore was subject to (and failed) strict First Amendment scrutiny. Id. at 17–21, 106 S.Ct. 903.
So too, in Turner Broadcasting System, Inc. v. FCC , the Court held that cable operators—companies that own cable lines and choose which stations to offer their customers—"engage in and transmit speech." 512 U.S. at 636, 114 S.Ct. 2445. "[B]y exercising editorial discretion over which stations or programs to include in [their] repertoire," the Court said, they "seek to communicate messages on a wide variety of topics and in a wide variety of formats." Id. (quotation marks omitted); see also Ark. Educ. TV Comm'n v. Forbes , 523 U.S. 666, 674, 118 S.Ct. 1633, 140 L.Ed.2d 875 (1998) ("Although programming decisions often involve the compilation of the speech of third parties, the decisions nonetheless constitute communicative acts."). Because cable operators’ decisions about which channels to transmit were protected speech, the challenged regulation requiring operators to carry broadcast-TV channels triggered First Amendment scrutiny. 512 U.S. at 637, 114 S.Ct. 2445.
In Turner , the Court applied intermediate scrutiny because the law was content-neutral. See 512 U.S. at 662, 114 S.Ct. 2445. The point for present purposes is that the Court held that the must-carry provision triggered First Amendment scrutiny.
Most recently, the Court applied the editorial-judgment principle to a parade organizer in Hurley v. Irish-American Gay, Lesbian & Bisexual Group of Boston , explaining that parades (like newspapers and cable-TV packages) constitute protected expression. 515 U.S. at 568, 115 S.Ct. 2338. The Supreme Judicial Court of Massachusetts had attempted to apply the state's public-accommodations law to require the organizers of a privately run parade to allow a gay-pride group to march. Id. at 564, 115 S.Ct. 2338. Citing Miami Herald , and using words equally applicable here, the Court observed that "the presentation of an edited compilation of speech generated by other persons ... fall[s] squarely within the core of First Amendment security" and that the "selection of contingents to make a parade is entitled to similar protection." Id. at 570, 115 S.Ct. 2338. The Court concluded that it didn't matter that the state was attempting to apply a public-accommodations statute because "once the expressive character of both the parade and the marching [gay-rights] contingent [was] understood, it bec[ame] apparent that the state courts’ application of the statute had the effect of declaring the [parade] sponsors’ speech itself to be the public accommodation," which "violates the fundamental rule of ... the First Amendment, that a speaker has the autonomy to choose the content of his own message." Id. at 573, 115 S.Ct. 2338. Nor did it matter, the Court explained, that the parade didn't produce a "particularized message": The parade organizer's decision to "exclude a message it did not like from the communication it chose to make" was "enough to invoke its right as a private speaker to shape its expression by speaking on one subject while remaining silent on another"—a choice "not to propound a particular point of view" that is "presumed to lie beyond the government's power to control." Id. at 574–75, 115 S.Ct. 2338.
Together, Miami Herald , Pacific Gas , and particularly Turner and Hurley establish that a private entity's decisions about whether, to what extent, and in what manner to disseminate third-party-created content to the public are editorial judgments protected by the First Amendment. For reasons we will explain, social-media platforms’ content-moderation decisions constitute the same sort of editorial judgments and thus trigger First Amendment scrutiny.
2
Separately, we might also assess social-media platforms’ content-moderation practices against our general standard for what constitutes inherently expressive conduct protected by the First Amendment. We recently explained that standard in Coral Ridge Ministries Media, Inc. v. Amazon.com, Inc. :
In determining whether conduct is expressive, we ask whether the reasonable person would interpret it as some sort of message, not whether an observer would necessarily infer a specific message. If we find that the conduct in question is expressive, any law regulating that conduct is subject to the First Amendment.
6 F.4th at 1254 (cleaned up).
In Coral Ridge , a Christian ministry and media organization sued Amazon.com, alleging that Amazon's decision to exclude the organization from the company's "AmazonSmile" charitable-giving program—based on the Southern Poverty Law Center's designation of the organization as a "hate group"—constituted religious discrimination in violation of Title II of the Civil Rights Act of 1964. Id. at 1250–51. We held that "Amazon's choice of what charities are eligible to receive donations through AmazonSmile" was expressive conduct—and notably, in so holding, we analogized Amazon's determination to the parade organizer's decisions in Hurley about which groups to include in the march. Id. at 1254–55. "A reasonable person would interpret" Amazon's exclusion of certain charities from the program based on the SPLC's hate-group designations, we said, "as Amazon conveying ‘some sort of message’ about the organizations it wishes to support." Id. (quoting Fort Lauderdale Food Not Bombs v. City of Fort Lauderdale , 901 F.3d 1235, 1240 (11th Cir. 2018) (" FLFNB I ")).
The Coral Ridge case built on our earlier decision in Fort Lauderdale Food Not Bombs . That case concerned a non-profit organization that distributed free food in a city park to communicate its view that society should end hunger and poverty by redirecting resources away from the military. 901 F.3d at 1238–39. When the city enacted an ordinance that would have prohibited distributing food in parks without prior authorization, the organization sued, arguing that its food-sharing events constituted inherently expressive conduct protected by the First Amendment. Id. at 1239–40. We held that given the surrounding context, the organization's food-sharing events would convey "some sort of message" to the reasonable observer—and were therefore " ‘a form of protected expression.’ " Id. at 1244–45 (quoting Spence v. Washington , 418 U.S. 405, 410, 94 S.Ct. 2727, 41 L.Ed.2d 842 (1974) ).
Whether we assess social-media platforms’ content-moderation activities against the Miami Herald line of cases or against our own decisions explaining what constitutes expressive conduct, the result is the same: Social-media platforms exercise editorial judgment that is inherently expressive. When platforms choose to remove users or posts, deprioritize content in viewers’ feeds or search results, or sanction breaches of their community standards, they engage in First-Amendment-protected activity.
Social-media platforms’ content-moderation decisions are, we think, closely analogous to the editorial judgments that the Supreme Court recognized in Miami Herald , Pacific Gas , Turner , and Hurley . Like parade organizers and cable operators, social-media companies are in the business of delivering curated compilations of speech created, in the first instance, by others. Just as the parade organizer exercises editorial judgment when it refuses to include in its lineup groups with whose messages it disagrees, and just as a cable operator might refuse to carry a channel that produces content it prefers not to disseminate, social-media platforms regularly make choices "not to propound a particular point of view." Hurley , 515 U.S. at 575, 115 S.Ct. 2338. Platforms employ editorial judgment to convey some messages but not others and thereby cultivate different types of communities that appeal to different groups. A few examples:
• YouTube seeks to create a "welcoming community for viewers" and, to that end, prohibits a wide range of content, including spam, pornography, terrorist incitement, election and public-health misinformation, and hate speech.
• Facebook engages in content moderation to foster "authenticity," "safety," "privacy," and "dignity," and accordingly, removes or adds warnings to a wide range of content—for example, posts that include what it considers to be hate speech, fraud or deception, nudity or sexual activity, and public-health misinformation.
• Twitter aims "to ensure all people can participate in the public conversation freely and safely" by removing content, among other categories, that it views as embodying hate, glorifying violence, promoting suicide, or containing election misinformation.
• Roblox, a gaming social network primarily for children, prohibits "[s]ingling out a user or group for ridicule or abuse," any sort of sexual content, depictions of and support for war or violence, and any discussion of political parties or candidates.
• Vegan Forum allows non-vegans but "will not tolerate members who promote contrary agendas."
Policies and Guidelines , YouTube, https://www.youtube.com/creators/how-things-work/policies-guidelines (last accessed May 15, 2022).
Facebook Community Standards , Meta, https://transparency.fb.com/policies/community-standards (last accessed May 15, 2022).
The Twitter Rules , Twitter, https://help.twitter.com/en/rules-and-policies/twitter-rules (last accessed May 15, 2022).
Roblox Community Standards , Roblox, https://en.help.roblox.com/hc/en-us/articles/203313410-Roblox-Community-Standards (last accessed May 15, 2022).
Membership Rules , Vegan Forum, https://www.vegan-forum.org/help/terms (last accessed May 15, 2022).
And to be clear, some platforms exercise editorial judgment to promote explicitly political agendas. On the right, ProAmericaOnly promises "No Censorship | No Shadow Bans | No BS | NO LIBERALS." And on the left, The Democratic Hub says that its "online community is for liberals, progressives, moderates, independent[s] and anyone who has a favorable opinion of Democrats and/or liberal political views or is critical of Republican ideology."
ProAmericaOnly, https://proamericaonly.org (last accessed May 15, 2022).
The Democratic Hub, https://www.democratichub.com (last accessed May 15, 2022).
All such decisions about what speech to permit, disseminate, prohibit, and deprioritize—decisions based on platforms’ own particular values and views—fit comfortably within the Supreme Court's editorial-judgment precedents.
Separately, but similarly, platforms’ content-moderation activities qualify as First-Amendment-protected expressive conduct under Coral Ridge and FLFNB I . A reasonable person would likely infer "some sort of message" from, say, Facebook removing hate speech or Twitter banning a politician. Indeed, unless posts and users are removed randomly , those sorts of actions necessarily convey some sort of message—most obviously, the platforms’ disagreement with or disapproval of certain content, viewpoints, or users. Here, for instance, the driving force behind S.B. 7072 seems to have been a perception (right or wrong) that some platforms’ content-moderation decisions reflected a "leftist" bias against "conservative" views—which, for better or worse, surely counts as expressing a message. That observers perceive bias in platforms’ content-moderation decisions is compelling evidence that those decisions are indeed expressive.
In an effort to rebut this point, the State responds that because the vast majority of content that makes it onto social-media platforms is never reviewed—let alone removed or deprioritized—platforms aren't engaged in conduct of sufficiently expressive quality to merit First Amendment protection. See Reply Br. of Appellant at 16. With respect, the State's argument misses the point. The "conduct" that the challenged provisions regulate—what this entire appeal is about—is the platforms’ "censorship" of users’ posts—i.e. , the posts that platforms do review and remove or deprioritize. The question, then, is whether that conduct is expressive. For reasons we've explained, we think it unquestionably is.
The fact that some social-media platforms choose to allow most content doesn't undermine their claim to First Amendment protection. See U.S. Telecom Ass'n v. FCC , 855 F.3d 381, 429 (D.C. Cir. 2017) (Kavanaugh, J., dissental) (explaining that the fact that platforms "have not been aggressively exercising their editorial discretion does not mean that they have no right to exercise their editorial discretion").
Texas and several other states as amici insist that social-media platforms’ "censorship, deplatforming, and shadow banning" activities aren't inherently expressive conduct for First Amendment purposes because the platforms don't "inten[d] to convey a particularized message." States’ Amicus Br. at 6–7 (quoting FLFNB I , 901 F.3d at 1240 ). They note that the platforms’ most prominent CEOs have denied accusations that their content rules are based on ideology or political perspective. But while an "intent to convey a particularized message" was once necessary to qualify as expressive conduct, FLFNB I explained that "[s]ince then ... the [Supreme] Court has clarified that a ‘narrow, succinctly articulable message is not a condition of constitutional protection’ because ‘if confined to expressions conveying a "particularized message" [the First Amendment] would never reach the unquestionably shielded painting of Jackson Pollock, music of Arnold Schoenberg, or Jabberwocky verse of Lewis Carroll.’ " FLFNB I , 901 F.3d at 1240 (last alteration in original) (quoting Hurley , 515 U.S. at 569, 115 S.Ct. 2338 )). Instead, as explained in text, we require only that a "reasonable person would interpret [the conduct] as some sort of message." Id. (quoting Holloman ex rel. Holloman v. Harland , 370 F.3d 1252, 1270 (11th Cir. 2004) ).
To the extent that the states argue that social-media platforms lack the requisite "intent" to convey a message, we find it implausible that platforms would engage in the laborious process of defining detailed community standards, identifying offending content, and removing or deprioritizing that content if they didn't intend to convey "some sort of message." Unsurprisingly, the record in this case confirms platforms’ intent to communicate messages through their content-moderation decisions—including that certain material is harmful or unwelcome on their sites. See, e.g. , Doc. 25-1 at 2 (declaration of YouTube executive explaining that its approach to content moderation "is to remove content that violates [its] policies (developed with outside experts to prevent real-world harms), reduce the spread of harmful misinformation ... and raise authoritative and trusted content"); Facebook Community Standards , supra (noting that Facebook moderates content "in service of" its "values" of "authenticity," "safety," "privacy," and "dignity").
B
In the face of the editorial-judgment and expressive-conduct cases, the State insists that S.B. 7072 doesn't even implicate, let alone violate, the First Amendment. The State's first line of argument relies on two cases— PruneYard Shopping Center v. Robins , 447 U.S. 74, 100 S.Ct. 2035, 64 L.Ed.2d 741 (1980), and Rumsfeld v. Forum for Academic & Institutional Rights, Inc. , 547 U.S. 47, 126 S.Ct. 1297, 164 L.Ed.2d 156 (2006) (" FAIR ")—in which the Supreme Court upheld government regulations that effectively compelled private actors to "host" others’ speech. The State's second argument seeks to evade—or at least minimize—First Amendment scrutiny by labeling social-media platforms "common carriers." We find neither argument convincing.
1
We begin with the "hosting" cases. The first decision to which the State points, PruneYard , is readily distinguishable. There, the Supreme Court affirmed a state court's decision requiring a privately owned shopping mall to allow members of the public to circulate petitions on its property. 447 U.S. at 76–77, 88, 100 S.Ct. 2035. In that case, though, the only First Amendment interest that the mall owner asserted was the right "not to be forced by the State to use [its] property as a forum for the speech of others." Id. at 85, 100 S.Ct. 2035. The Supreme Court's subsequent decisions in Pacific Gas and Hurley distinguished and cabined PruneYard . The Pacific Gas plurality explained that "[n]otably absent from PruneYard was any concern that access to this area might affect the shopping center owner's exercise of his own right to speak: the owner did not even allege that he objected to the content of the pamphlets." 475 U.S. at 12, 106 S.Ct. 903 (plurality op.); see also id. at 24, 106 S.Ct. 903 (Marshall, J., concurring in the judgment) ("While the shopping center owner in PruneYard wished to be free of unwanted expression, he nowhere alleged that his own expression was hindered in the slightest."); Hurley , 515 U.S. at 580, 115 S.Ct. 2338 (noting that the "principle of speaker's autonomy was simply not threatened in" PruneYard ). Because NetChoice asserts that S.B. 7072 interferes with the platforms’ own speech rights by forcing them to carry messages that contradict their community standards and terms of service, PruneYard is inapposite.
FAIR may be a bit closer, but it, too, is distinguishable. In that case, the Supreme Court upheld a federal statute—the Solomon Amendment—that required law schools, as a condition to receiving federal funding, to allow military recruiters the same access to campuses and students as any other employer. 547 U.S. at 56, 126 S.Ct. 1297. The schools, which had restricted recruiters’ access because they opposed the military's "Don't Ask, Don't Tell" policy regarding gay servicemembers, protested that requiring them to host recruiters and post notices on their behalf violated the First Amendment. Id. at 51, 126 S.Ct. 1297. But the Court held that the law didn't implicate the First Amendment because it "neither limit[ed] what law schools may say nor require[d] them to say anything." Id. at 60, 126 S.Ct. 1297. In so holding, the Court rejected two arguments for why the First Amendment should apply—(1) that the Solomon Amendment unconstitutionally required law schools to host the military's speech, and (2) that it restricted the law schools’ expressive conduct. Id. at 60–61, 126 S.Ct. 1297.
With respect to the first argument, the Court distinguished Miami Herald , Pacific Gas , and Hurley on the ground that, in those cases, "the complaining speaker's own message was affected by the speech it was forced to accommodate." Id. at 63, 126 S.Ct. 1297. The Solomon Amendment's requirement that schools host military recruiters did "not affect the law schools’ speech," the Court said, "because the schools [were] not speaking when they host[ed] interviews and recruiting receptions": Recruiting activities, the Court reasoned, simply aren't "inherently expressive"—they're not speech —in the way that editorial pages, newsletters, and parades are. Id. at 64, 126 S.Ct. 1297. Therefore, the Court concluded, "accommodation of a military recruiter's message is not compelled speech because the accommodation does not sufficiently interfere with any message of the school." Id. Nor did the Solomon Amendment's requirement that schools send notices on behalf of military recruiters unconstitutionally compel speech, the Court held, as it was merely incidental to the law's regulation of conduct . Id. at 62, 126 S.Ct. 1297.
The FAIR Court also rejected the law schools’ second argument—namely, that the Solomon Amendment restricted their inherently expressive conduct. The schools’ refusal to allow military recruiters on campus was expressive, the Court emphasized, "only because [they] accompanied their conduct with speech explaining it." Id . at 66, 126 S.Ct. 1297. In the normal course, the Court said, an observer "who s[aw] military recruiters interviewing away from the law school [would have] no way of knowing" whether the school was expressing a message or, instead, the school's rooms just happened to be full or the recruiters just preferred to interview elsewhere. Id. Because "explanatory speech" was necessary to understand the message conveyed by the law schools’ conduct, the Court concluded, that conduct wasn't "inherently expressive." Id.
FAIR isn't controlling here because social-media platforms warrant First Amendment protection on both of the grounds that the Court found lacking for law-school recruiting services.
First, S.B. 7072 interferes with social-media platforms’ own "speech" within the meaning of the First Amendment. Social-media platforms, unlike law-school recruiting services, are in the business of disseminating curated collections of speech. A social-media platform that "exercises editorial discretion in the selection and presentation of" the content that it disseminates to its users "engages in speech activity." Ark. Educ. TV Comm'n , 523 U.S. at 674, 118 S.Ct. 1633 ; see Sorrell , 564 U.S. at 570, 131 S.Ct. 2653 (explaining that the "dissemination of information" is "speech within the meaning of the First Amendment"); Bartnicki v. Vopper , 532 U.S. 514, 527, 121 S.Ct. 1753, 149 L.Ed.2d 787 (2001) ("If the acts of ‘disclosing’ and ‘publishing’ information do not constitute speech, it is hard to imagine what does fall within that category." (cleaned up)). Just as the must-carry provisions in Turner "reduce[d] the number of channels over which cable operators exercise[d] unfettered control" and therefore triggered First Amendment scrutiny, 512 U.S. at 637, 114 S.Ct. 2445, S.B. 7072's content-moderation restrictions reduce the number of posts over which platforms can exercise their editorial judgment. Because a social-media platform itself "spe[aks]" by curating and delivering compilations of others’ speech—speech that may include messages ranging from Facebook's promotion of authenticity, safety, privacy, and dignity to ProAmericaOnly's "No BS | No LIBERALS"—a law that requires the platform to disseminate speech with which it disagrees interferes with its own message and thereby implicates its First Amendment rights.
Second, social-media platforms are engaged in inherently expressive conduct of the sort that the Court found lacking in FAIR . As we were careful to explain in FLFNB I , FAIR "does not mean that conduct loses its expressive nature just because it is also accompanied by other speech." 901 F.3d at 1243–44. Rather, "[t]he critical question is whether the explanatory speech is necessary for the reasonable observer to perceive a message from the conduct." Id . at 1244. And we held that an advocacy organization's food-sharing events constituted expressive conduct from which, "due to the context surrounding them, the reasonable observer would infer some sort of message"—even without reference to the words "Food Not Bombs" on the organization's banners. Id. at 1245. Context, we held, is what differentiates "activity that is sufficiently expressive [from] similar activity that is not"—e.g. , "the act of sitting down" from "the sit-in by African Americans at a Louisiana library" protesting segregation. Id. at 1241 (citing Brown v. Louisiana , 383 U.S. 131, 141–42, 86 S.Ct. 719, 15 L.Ed.2d 637 (1966) ).
Unlike the law schools in FAIR , social-media platforms’ content-moderation decisions communicate messages when they remove or "shadow-ban" users or content. Explanatory speech isn't "necessary for the reasonable observer to perceive a message from," for instance, a platform's decision to ban a politician or remove what it perceives to be misinformation. Id. at 1244. Such conduct—the targeted removal of users’ speech from websites whose primary function is to serve as speech platforms—conveys a message to the reasonable observer "due to the context surrounding" it. Id. at 1245 ; see also Coral Ridge , 6 F.4th at 1254. Given the context, a reasonable observer witnessing a platform remove a user or item of content would infer, at a minimum, a message of disapproval. Thus, social-media platforms engage in content moderation that is inherently expressive notwithstanding FAIR .
One might object that users know that social-media platforms remove content, deplatform users, or deprioritize posts only because of the platforms’ speech explaining those decisions—so the conduct itself isn't inherently expressive. See FAIR , 547 U.S. at 66, 126 S.Ct. 1297. But unlike the person who observes military recruiters interviewing away from a law school and has no idea whether the school is thereby expressing a message, see id. , we find it unlikely that a reasonable observer would think, for instance, that the reason he rarely or never sees pornography on Facebook is that none of Facebook's billions of users ever posts any. The more reasonable inference to be drawn from the fact that certain types of content rarely or never appear when a user browses a social-media site—or that certain posts disappear or prolific Twitter users vanish from the platform after making controversial statements—is that the platform disapproves.
It might be, we suppose, that some content-moderation decisions—for instance, to prioritize or deprioritize individual posts—are so subtle that users wouldn't notice them but for the platforms’ speech explaining their actions. But even if some subset of content-moderation activities wouldn't count as inherently expressive conduct under FAIR and FLFNB I , many are sufficiently transparent that users would likely notice them and, in context, infer from them "some sort of message"—even in the absence of explanatory speech. Specifically, it's likely clear to viewers that platforms take down individual posts, remove entire categories of content, and deplatform other users—and that such actions express messages. "Shadow-banning" would also likely be apparent and communicate a message to a reasonable user who knows that she follows a particular poster but doesn't see that poster's content, for instance, in her feed or search results. Thus, even if some content moderation isn't inherently expressive, much of it is. See United States v. Stevens , 559 U.S. 460, 473, 130 S.Ct. 1577, 176 L.Ed.2d 435 (2010) (noting that a statute facially violates the First Amendment if "a substantial number of its applications are unconstitutional, judged in relation to its plainly legitimate sweep" (quotation marks omitted)). As explained in text, S.B. 7072's content-moderation restrictions all regulate platforms’ inherently expressive conduct and trigger heightened scrutiny. See infra Part II.C.
* * *
The State asserts that PruneYard and FAIR —and, for that matter, the Supreme Court's editorial-judgment decisions—establish three "guiding principles" that should lead us to conclude that S.B. 7072 doesn't implicate the First Amendment. We disagree.
The first principle—that a regulation must interfere with the host's ability to speak in order to implicate the First Amendment—does find support in FAIR . See 547 U.S. at 64, 126 S.Ct. 1297. Even so, the State's argument —that S.B. 7072 doesn't interfere with platforms’ ability to speak because they can still affirmatively dissociate themselves from the content that they disseminate—encounters two difficulties. As an initial matter, in at least one key provision, the Act defines the term "censor" to include "posting an addendum," i.e. , a disclaimer—and thereby explicitly prohibits the very speech by which a platform might dissociate itself from users’ messages. Fla. Stat. § 501.2041(1)(b). Moreover, and more fundamentally, if the exercise of editorial judgment—the decision about whether, to what extent, and in what manner to disseminate third-party content—is itself speech or inherently expressive conduct, which we have said it is, then the Act does interfere with platforms’ ability to speak. See Pacific Gas , 475 U.S. at 10–12, 16, 106 S.Ct. 903 (plurality op.) (noting that if the government could compel speakers to "propound ... messages with which they disagree," the First Amendment's protection "would be empty, for the government could require speakers to affirm in one breath that which they deny in the next").
The State's second principle—that in order to trigger First Amendment scrutiny a regulation must create a risk that viewers or listeners might confuse a user's and the platform's speech—finds little support in our precedent. Consumer confusion simply isn't a prerequisite to First Amendment protection. In Miami Herald , for instance, even though no reasonable observer would have mistaken a political candidate's statutorily mandated right-to-reply column for the newspaper reversing its earlier criticism, the Supreme Court deemed the paper's editorial judgment to be protected. See 418 U.S. at 244, 258, 94 S.Ct. 2831. Nor was there a risk of consumer confusion in Turner : No reasonable person would have thought that the cable operator there endorsed every message conveyed by every speaker on every one of the channels it carried, and yet the Court stated categorically that the operator's editorial discretion was protected. See 512 U.S. at 636–37, 114 S.Ct. 2445. Moreover, it seems to us that the State's confusion argument boomerangs back around on itself: If a platform announces a community standard prohibiting, say, hate speech, but is then barred from removing or even disclaiming posts containing what it perceives to be hate speech, there's a real risk that a viewer might erroneously conclude that the platform doesn't consider those posts to constitute hate speech.
The State's final principle—that in order to receive First Amendment protection a platform must curate and present speech in such a way that a "common theme" emerges—is similarly flawed. Hurley held that "a private speaker does not forfeit constitutional protection simply by combining multifarious voices, or by failing to edit their themes to isolate an exact message as the exclusive subject matter of the speech." 515 U.S. at 569–70, 115 S.Ct. 2338 ; see FLFNB I , 901 F.3d at 1240 (citing Hurley for the proposition that a "particularized message" isn't required for conduct to qualify for First Amendment protection). Moreover, even if one could theoretically attribute a common theme to a parade, Turner makes clear that no such theme is required: It seems to us inconceivable that one could ascribe a common theme to the cable operator's choice there to carry hundreds of disparate channels, and yet the Court held that the First Amendment protected the operator's editorial discretion. 512 U.S. at 636, 114 S.Ct. 2445.
Of course, to the extent that one might say that a cable operator is pursuing, say, a "theme" of non -obscenity, the very same sort of thing could be said of social-media platforms. See Facebook Community Standards , supra (explaining that Facebook prohibits many categories of content as it seeks to foster the values of "authenticity," "safety," "privacy," and "dignity").
In short, the State's reliance on PruneYard and FAIR and its attempts to distinguish the editorial-judgment line of cases are unavailing.
2
The State separately seeks to evade (or at least minimize) First Amendment scrutiny by labeling social-media platforms "common carriers." The crux of the State's position, as expressed at oral argument, is that "[t]here are certain services that society determines people shouldn't be required to do without," and that this is "true of social media in the 21st century." Oral Arg. at 18:37 et seq . For reasons we explain, we disagree.
We say "or at least minimize" because it's not entirely clear what work a common-carrier designation would perform in a First Amendment analysis. While the Supreme Court has suggested that common carriers "receive a lower level of First Amendment protection than other forms of communication," it has never explained the precise level of protection that they do receive. Christopher S. Yoo, The First Amendment, Common Carriers, and Public Accommodations: Net Neutrality, Digital Platforms, and Privacy , 1 J. Free Speech L. 463, 480–82 (2021) ; see also FCC v. League of Women Voters of Cal. , 468 U.S. 364, 378, 104 S.Ct. 3106, 82 L.Ed.2d 278 (1984) (noting only that "[u]nlike common carriers, broadcasters are entitled under the First Amendment to exercise the widest journalistic freedom consistent with their public duties" (cleaned up)). Moreover, at common law, even traditional common carriers like innkeepers were allowed to exclude drunks, criminals, diseased persons, and others who were "obnoxious to [ ] others," and telegraph companies weren't required to accept "obscene, blasphemous, profane or indecent messages." See TechFreedom Amicus Br. at 29 (quoting 1 Bruce Wyman, The Special Law Governing Public Service Corporations, and All Others Engaged in Public Employment §§ 632–33 (1911)). Because S.B. 7072 prevents platforms from removing content regardless of its impact on others, it appears to extend beyond the historical obligations of common carriers.
At the outset, we confess some uncertainty whether the State means to argue (a) that platforms are already common carriers, and so possess no (or only minimal) First Amendment rights, or (b) that the State can, by dint of ordinary legislation, make them common carriers, thereby abrogating any First Amendment rights that they currently possess. Whatever the State's position, we are unpersuaded.
a
The first version of the argument fails because, in point of fact, social-media platforms are not—in the nature of things, so to speak—common carriers. That is so for at least three reasons.
First, social-media platforms have never acted like common carriers. "[I]n the communications context," common carriers are entities that "make a public offering to provide communications facilities whereby all members of the public who choose to employ such facilities may communicate or transmit intelligence of their own design and choosing"—they don't "make individualized decisions, in particular cases, whether and on what terms to deal." FCC v. Midwest Video Corp ., 440 U.S. 689, 701, 99 S.Ct. 1435, 59 L.Ed.2d 692 (1979) (cleaned up). While it's true that social-media platforms generally hold themselves open to all members of the public, they require users, as preconditions of access, to accept their terms of service and abide by their community standards. In other words, Facebook is open to every individual if, but only if, she agrees not to transmit content that violates the company's rules. Social-media users, accordingly, are not freely able to transmit messages "of their own design and choosing" because platforms make—and have always made—"individualized" content- and viewpoint-based decisions about whether to publish particular messages or users.
Second, Supreme Court precedent strongly suggests that internet companies like social-media platforms aren't common carriers. While the Court has applied less stringent First Amendment scrutiny to television and radio broadcasters, the Turner Court cabined that approach to "broadcast" media because of its "unique physical limitations"—chiefly, the scarcity of broadcast frequencies. 512 U.S. at 637–39, 114 S.Ct. 2445. Instead of "comparing cable operators to electricity providers, trucking companies, and railroads—all entities subject to traditional economic regulation"—the Turner Court "analogized the cable operators [in that case] to the publishers, pamphleteers, and bookstore owners traditionally protected by the First Amendment." U.S. Telecom Ass'n v. FCC , 855 F.3d 381, 428 (D.C. Cir. 2017) (Kavanaugh, J., dissental); see Turner , 512 U.S. at 639, 114 S.Ct. 2445. And indeed, the Court explicitly distinguished online from broadcast media in Reno v. American Civil Liberties Union , emphasizing that the "vast democratic forums of the Internet" have never been "subject to the type of government supervision and regulation that has attended the broadcast industry." 521 U.S. 844, 868–69, 117 S.Ct. 2329, 138 L.Ed.2d 874 (1997). These precedents demonstrate that social-media platforms should be treated more like cable operators, which retain their First Amendment right to exercise editorial discretion, than traditional common carriers.
Finally, Congress has distinguished internet companies from common carriers. The Telecommunications Act of 1996 explicitly differentiates "interactive computer services"—like social-media platforms—from "common carriers or telecommunications services." See, e.g. , 47 U.S.C. § 223(e)(6) ("Nothing in this section shall be construed to treat interactive computer services as common carriers or telecommunications carriers."). And the Act goes on to provide protections for internet companies that are inconsistent with the traditional common-carrier obligation of indiscriminate service. In particular, it explicitly protects internet companies’ ability to restrict access to a plethora of material that they might consider "objectionable." Id. § 230(c)(2)(A). Federal law's recognition and protection of social-media platforms’ ability to discriminate among messages—disseminating some but not others—is strong evidence that they are not common carriers with diminished First Amendment rights.
b
If social-media platforms are not common carriers either in fact or by law, the State is left to argue that it can force them to become common carriers, abrogating or diminishing the First Amendment rights that they currently possess and exercise. Neither law nor logic recognizes government authority to strip an entity of its First Amendment rights merely by labeling it a common carrier. Quite the contrary, if social-media platforms currently possess the First Amendment right to exercise editorial judgment, as we hold it is substantially likely they do, then any law infringing that right—even one bearing the terminology of "common carri[age]"—should be assessed under the same standards that apply to other laws burdening First-Amendment-protected activity. See Denver Area Educ. Telecomm. Consortium, Inc. v. FCC , 518 U.S. 727, 825, 116 S.Ct. 2374, 135 L.Ed.2d 888 (1996) (Thomas, J., concurring in the judgment in part and dissenting in part) ("Labeling leased access a common carrier scheme has no real First Amendment consequences."); Cablevision Sys. Corp. v. FCC , 597 F.3d 1306, 1321–22 (D.C. Cir. 2010) (Kavanaugh, J., dissenting) (explaining that because video programmers have a constitutional right to exercise editorial discretion, "the Government cannot compel [them] to operate like ‘dumb pipes’ or ‘common carriers’ that exercise no editorial control"); U.S. Telecom Ass'n , 855 F.3d at 434 (Kavanaugh, J., dissental) ("Can the Government really force Facebook and Google ... to operate as common carriers?").
* * *
The State's best rejoinder is that because large social-media platforms are clothed with a "public trust" and have "substantial market power," they are (or should be treated like) common carriers. Br. of Appellants at 35–37; see Biden v. Knight First Amend. Inst. , ––– U.S. ––––, 141 S. Ct. 1220, 1226, 209 L.Ed.2d 519 (2021) (Thomas, J., concurring). These premises aren't uncontroversial, but even if they're true, they wouldn't change our conclusion. The State doesn't argue that market power and public importance are alone sufficient reasons to recharacterize a private company as a common carrier; rather, it acknowledges that the "basic characteristic of common carriage is the requirement to hold oneself out to serve the public indiscriminately." Br. of Appellants at 35 (quoting U.S. Telecom. Ass'n v. FCC , 825 F.3d 674, 740 (D.C. Cir. 2016) ); see Knight , 141 S. Ct. at 1223 (Thomas, J., concurring). The problem, as we've explained, is that social-media platforms don't serve the public indiscriminately but, rather, exercise editorial judgment to curate the content that they display and disseminate.
The State seems to argue that even if platforms aren't currently common carriers, their market power and public importance might justify their "legislative designation ... as common carriers." Br. of Appellants at 36; see Knight , 141 S. Ct. at 1223 (Thomas, J., concurring) (noting that the Court has suggested that common-carrier regulations "may be justified, even for industries not historically recognized as common carriers, when a business ... rises from private to be a public concern" (quotation marks omitted)). That might be true for an insurance or telegraph company, whose only concern is whether its "property" becomes "the means of rendering the service which has become of public interest." Knight , 141 S. Ct. at 1223 (Thomas, J., concurring) (quoting German All. Ins. Co. v. Lewis , 233 U.S. 389, 408, 34 S.Ct. 612, 58 L.Ed. 1011 (1914) ). But the Supreme Court has squarely rejected the suggestion that a private company engaging in speech within the meaning of the First Amendment loses its constitutional rights just because it succeeds in the marketplace and hits it big. See Miami Herald , 418 U.S. at 251, 258, 94 S.Ct. 2831.
In short, because social-media platforms exercise—and have historically exercised—inherently expressive editorial judgment, they aren't common carriers, and a state law can't force them to act as such unless it survives First Amendment scrutiny.
C
With one exception, we hold that the challenged provisions of S.B. 7072 trigger First Amendment scrutiny either (1) by restricting social-media platforms’ ability to exercise editorial judgment or (2) by imposing disclosure requirements. Here's a brief rundown.
S.B. 7072's content-moderation restrictions all limit platforms’ ability to exercise editorial judgment and thus trigger First Amendment scrutiny. The provisions that prohibit deplatforming candidates ( § 106.072(2) ), deprioritizing and "shadow-banning" content by or about candidates ( § 501.2041(2)(h) ), and censoring, deplatforming, or shadow-banning "journalistic enterprises" ( § 501.2041(2)(j) ) all clearly restrict platforms’ editorial judgment by preventing them from removing or deprioritizing content or users and forcing them to disseminate messages that they find objectionable.
The consistency requirement ( § 501.2041(2)(b) ) and the 30-day restriction ( § 501.2041(2)(c) ) also—if somewhat less obviously—restrict editorial judgment. Together, these provisions force platforms to remove (or retain) all content that is similar to material that they have previously removed (or retained). Even if a platform wants to retain or remove content in an inconsistent manner—for instance, to steer discourse in a particular direction—it may not do so. And even if a platform wants to leave certain content up and continue distributing it to users, it can't do so if within the past 30 days it's removed other content that a court might find to be similar. These provisions thus burden platforms’ right to make editorial judgments on a case-by-case basis or to change the types of content they'll disseminate—and, hence, the messages they express.
The user-opt-out requirement ( § 501.2041(2)(f), (g) ) also triggers First Amendment scrutiny because it forces platforms, upon a user's request, not to exercise the editorial discretion that they otherwise would in curating content—prioritizing some posts and deprioritizing others—in the user's feed. Even if a platform would prefer, for its own reasons, to give greater prominence to some posts while limiting the reach of others, the opt-out provision would prohibit it from doing so, at least with respect to some users.
S.B. 7072's disclosure provisions implicate the First Amendment, but for a different reason. These provisions don't directly restrict editorial judgment or expressive conduct, but indirectly burden platforms’ editorial judgment by compelling them to disclose certain information. Laws that compel commercial disclosures and thereby indirectly burden protected speech trigger a relatively permissive form of First Amendment scrutiny, as we explain below. See Zauderer , 471 U.S. at 651, 105 S.Ct. 2265 ; Nat'l Inst. of Fam. & Life Advocs. v. Becerra , ––– U.S. ––––, 138 S. Ct. 2361, 2378, 201 L.Ed.2d 835 (2018) (" NIFLA ").
Finally, the exception: We hold that S.B. 7072's user-data-access requirement ( § 501.2041(2)(i) ) does not trigger First Amendment scrutiny. This provision—which requires social-media platforms to allow deplatformed users to access their own data stored on the platform's servers for at least 60 days—doesn't prevent or burden to any significant extent the exercise of editorial judgment or compel any disclosure.
It is theoretically possible that this provision could impose such an inordinate burden on the platforms’ First Amendment rights that some scrutiny would apply. But at this stage of the proceedings, the plaintiffs haven't shown a substantial likelihood of success on the merits of their claim that it implicates the First Amendment.
* * *
Taking stock: We conclude that social-media platforms’ content-moderation activities—permitting, removing, prioritizing, and deprioritizing users and posts—constitute "speech" within the meaning of the First Amendment. All but one of S.B. 7072's operative provisions implicate platforms’ First Amendment rights and are therefore subject to First Amendment scrutiny.
III
A
Having determined that it is substantially likely that S.B. 7072 triggers First Amendment scrutiny, we must now determine the level of scrutiny to apply—and to which provisions.
We begin with the basics. "[A] content-neutral regulation of expressive conduct is subject to intermediate scrutiny, while a regulation based on the content of the expression must withstand the additional rigors of strict scrutiny." FLFNB II , 11 F.4th at 1291 ; see also Turner , 512 U.S. at 643–44, 662, 114 S.Ct. 2445 (noting that although the challenged provisions "interfere[d] with cable operators’ editorial discretion," they were content-neutral and so would be subject only to intermediate scrutiny). A law is content-based if it "suppress[es], disadvantage[s], or impose[s] differential burdens upon speech because of its content," Turner , 512 U.S. at 642, 114 S.Ct. 2445 —i.e. , if it "applies to particular speech because of the topic discussed or the idea or message expressed," Reed v. Town of Gilbert , 576 U.S. 155, 163, 135 S.Ct. 2218, 192 L.Ed.2d 236 (2015). A law can be content-based either because it draws "facial distinctions ... defining regulated speech by particular subject matter" or because, though facially neutral, it "cannot be justified without reference to the content of the regulated speech." Id. at 163–64, 135 S.Ct. 2218 (quotation marks omitted).
Viewpoint-based laws—"[w]hen the government targets not subject matter, but particular views taken by speakers on a subject"—constitute "an egregious form of content discrimination." Rosenberger v. Rector & Visitors of Univ. of Va. , 515 U.S. 819, 829, 115 S.Ct. 2510, 132 L.Ed.2d 700 (1995). They "are prohibited," seemingly as a per se matter. Minn. Voters All. v. Mansky , ––– U.S. ––––, 138 S. Ct. 1876, 1885, 201 L.Ed.2d 201 (2018) ; see Turner , 512 U.S. at 642, 114 S.Ct. 2445 ("The government may not regulate speech based on hostility—or favoritism—towards the underlying message expressed." (quotation marks omitted and alteration adopted)).
1
NetChoice asks us to affirm the district court's conclusion that S.B. 7072's "viewpoint-based motivation" subjects the entire Act—every provision—"to strict scrutiny, root and branch." Doc. 113 at 25 (emphasis added). It's certainly true—as already explained—that at least a handful of S.B. 7072's key proponents candidly acknowledged their desire to combat what they perceived to be the "leftist" bias of the "big tech oligarchs" against "conservative" ideas. Id. It's also true that the Act applies only to a subset of speakers consisting of the largest social-media platforms and that the law's enacted findings refer to the platforms’ allegedly "unfair" censorship. See S.B. 7072 § 1(9), (10); Fla. Stat. § 501.2041(1)(g). But given the state of our (sometimes dissonant) precedents, we don't think that NetChoice is substantially likely to succeed on the merits of its claim that the entire Act is impermissibly viewpoint-based. Here's why.
We have held—"many times"—that "when a statute is facially constitutional, a plaintiff cannot bring a free-speech challenge by claiming that the lawmakers who passed it acted with a constitutionally impermissible purpose." In re Hubbard , 803 F.3d 1298, 1312 (11th Cir. 2015). In Hubbard , we cited (among other decisions) United States v. O'Brien for the proposition that courts shouldn't look to a law's legislative history to find an illegitimate motivation for an otherwise constitutional statute. Id. (citing United States v. O'Brien , 391 U.S. 367, 383, 88 S.Ct. 1673, 20 L.Ed.2d 672 (1968) ). The plaintiffs in O'Brien had challenged a law prohibiting the burning of draft cards on the ground that Congress's "purpose"—as evidenced in the statements of several legislators—was "to suppress freedom of speech." 391 U.S. at 382–83, 88 S.Ct. 1673. The Supreme Court refused to void the statute "on the basis of what fewer than a handful of Congressmen said about it" given that Congress "had the undoubted power to enact" it if legislators had only made " ‘wiser’ speech[es] about it." Id. at 384, 88 S.Ct. 1673 ; see also Arizona v. California , 283 U.S. 423, 455, 51 S.Ct. 522, 75 L.Ed. 1154 (1931) ("Into the motives which induced members of Congress to enact the [statute], this court may not inquire."). Even though the statute in O'Brien regulated expressive conduct and its legislative history suggested a viewpoint-based motivation, the O'Brien Court declined to invalidate the statute as a per se matter, or even apply strict scrutiny, but rather upheld the law under what we have come to call intermediate scrutiny. 391 U.S. at 382, 88 S.Ct. 1673.
To be fair, there is some support for NetChoice's motivation-based argument for invalidating S.B. 7072 in toto, but not enough to overcome the clear statements in Hubbard and O'Brien . It's true that the Supreme Court said in Turner that "even a regulation neutral on its face may be content based if its manifest purpose is to regulate speech because of the message it conveys." Turner , 512 U.S. at 645–46, 114 S.Ct. 2445 (emphasis added). And Turner cited, with a hazy "cf. " signal, Church of Lukumi Babalu Aye, Inc. v. Hialeah , 508 U.S. 520, 534–535, 113 S.Ct. 2217, 124 L.Ed.2d 472 (1993), which held that in the free-exercise context, it was appropriate to look beyond "the text of the laws at issue" to identify discriminatory animus against a minority religion. But NetChoice hasn't cited—and we're not aware of—any Supreme Court or Eleventh Circuit decision that relied on legislative history or statements by proponents to characterize as viewpoint-based a law challenged on free-speech grounds. The closest the Supreme Court seems to have come is in Sorrell v. IMS Health, Inc. , in which it looked to a statute's "formal legislative findings" to dispel "any doubt" that the challenged statute was content-based. 564 U.S. at 564–65, 131 S.Ct. 2653. But the only evidence of viewpoint-based motivation in S.B. 7072's enacted findings are the references to "unfair[ness]." Those, we think, are far less damning than the findings in Sorrell , which expressly—and startlingly—stated that the regulated speakers conveyed messages that were "often in conflict with the goals of the state." 564 U.S. at 565, 131 S.Ct. 2653 (quotation marks omitted).
To be sure, in Ranch House, Inc. v. Amerson , we observed that in determining whether a law prohibiting nude-dance establishments had the purpose of "suppress[ing] protected speech," a court could examine the statute's "legislative findings[,] ... legislative history, and studies and information of which legislators were clearly aware." 238 F.3d 1273, 1280 (11th Cir. 2001). But Ranch House is largely inapposite. First, Ranch House seems, at most, to have ratified the possibility that a legislature's content-neutral purpose—combatting nude-dance establishments’ "secondary effects"—could save a law that facially discriminated on the basis of content. Id. at 1279–80 (citing City of Renton v. Playtime Theatres, Inc. , 475 U.S. 41, 46–48, 106 S.Ct. 925, 89 L.Ed.2d 29 (1986) ). That's the opposite of what NetChoice asks us to do here—i.e. , to invalidate a facially viewpoint-neutral law on the basis of its legislative history. Second, Ranch House recognized that the "[s]econdary effects doctrine is an exception to the general rule that a statute which on its face distinguishes among particular types of speech or expression by content is subject to the strictest scrutiny." Id. at 1282. We decline to extend Ranch House ’s limited endorsement of legislative-history reviews beyond the unique nude-dancing and secondary-effects contexts.
Finally, the fact that S.B. 7072 targets only a subset of social-media platforms isn't enough to subject the entire law to strict scrutiny or per se invalidation. It's true that the Supreme Court's "precedents are deeply skeptical of laws that distinguish among different speakers, allowing speech by some but not others" because they "run the risk that the State has left unburdened those speakers whose messages are in accord with its own views." NIFLA , 138 S. Ct. at 2378 (quotation marks omitted); cf. Minneapolis Star & Tribune Co. v. Minn. Comm'r of Revenue , 460 U.S. 575, 592, 103 S.Ct. 1365, 75 L.Ed.2d 295 (1983) (noting that the power to "single[ ] out a few members of the press presents such a potential for abuse that no interest suggested by [the State] can justify the scheme"). But "[i]t would be error to conclude ... that the First Amendment mandates strict scrutiny for any speech regulation that applies to one medium (or a subset thereof) but not others": "[H]eightened scrutiny is unwarranted when the differential treatment is ‘justified by some special characteristic of’ the particular medium being regulated." Turner , 512 U.S. at 660–61, 114 S.Ct. 2445 (quoting Minneapolis Star , 460 U.S. at 585, 103 S.Ct. 1365 ). S.B. 7072's application to only the largest social-media platforms might be viewpoint-motivated, or it might be based on some other "special characteristic" of large platforms—for instance, their market power. See Appellant's App'x at 237–46. Given Hubbard and O'Brien —and in the absence of clear precedent enabling us to find a viewpoint-discriminatory purpose based on legislative history—we conclude that NetChoice hasn't shown a substantial likelihood of success on the merits of its argument that S.B. 7072 should be stricken, or subject to strict scrutiny, in its entirety.
NetChoice suggests that speaker-based laws trigger strict scrutiny, but on our reading of precedent, speaker-based laws don't constitute an analytical category distinct from content-based and viewpoint-based laws. Rather, speaker-based distinctions trigger strict scrutiny—or perhaps face per se invalidation—when they indicate underlying content- or viewpoint-based discrimination . See Turner , 512 U.S. at 658, 114 S.Ct. 2445 ("[L]aws favoring some speakers over others demand strict scrutiny when the legislature's speaker preference reflects a content preference ." (emphasis added)); Reed , 576 U.S. at 170, 135 S.Ct. 2218 ("Characterizing a distinction as speaker-based is only the beginning—not the end—of our inquiry."). While the Sorrell Court noted that the challenged law imposed "a content- and speaker-based burden" that warranted "heightened scrutiny," it's not clear that the law's speaker-based distinctions would have mandated heightened scrutiny had the law not also been content- and viewpoint-based. 564 U.S. at 570–72, 131 S.Ct. 2653.
Given the somewhat unsettled state of precedent, we needn't—and don't—decide whether courts can ever refer to a statute's legislative and enactment history to find it viewpoint-based.
2
Having determined that we cannot use the Act's chief proponents’ statements as a basis to invalidate S.B. 7072 "root and branch," we must proceed on a more nuanced basis to determine what sort of scrutiny each provision—or category of provisions—triggers.
To start, we hold that it is substantially likely that what we have called the Act's content-moderation restrictions are subject to either strict or intermediate First Amendment scrutiny, depending on whether they are content-based or content-neutral. See FLFNB II , 11 F.4th at 1291–92. Some of these provisions are self-evidently content-based and thus subject to strict scrutiny. The journalistic-enterprises provision, for instance, prohibits a platform from making content-moderation decisions concerning any "journalistic enterprise based on the content of " its posts, Fla. Stat. § 501.2041(2)(j) (emphasis added), and thus applies "because of the ... message" that the platform's decision expresses, Reed , 576 U.S. at 163, 135 S.Ct. 2218 : Removing a journalistic enterprise's post, for instance, because it is duplicative or too big is permissible, but removing a post to communicate disapproval of its content isn't. Similarly, the restriction on deprioritizing posts "about ... a candidate," id . § 501.2041(2)(h), regulates speech based on "the topic discussed," Reed , 576 U.S. at 163, 135 S.Ct. 2218, and is therefore clearly content-based. At the other end of the spectrum, the candidate-deplatforming ( § 106.072(2) ) and user-opt-out ( § 501.2041(2)(f), (g) ) provisions are pretty obviously content-neutral. Neither a prohibition on banishing political candidates nor a requirement that platforms allow users to decline content curation depends in any way on the substance of platforms’ content-moderation decisions.
Some of the provisions—for instance, § 501.2041(2)(b) ’s requirement that platforms exercise their content-moderation authority "consistently"—may exhibit both content-based and content-neutral characteristics. Ultimately, though, we find that we needn't precisely categorize each and every one of S.B. 7072's content-moderation restrictions because it is substantially likely that they are all "regulation[s] of expressive conduct" that, at the very least, trigger intermediate scrutiny, FLFNB II , 11 F.4th at 1291–92 —and, for reasons we'll explain in the next Part, none survive even that, cf. Sorrell , 564 U.S. at 571, 131 S.Ct. 2653 (noting that because "the outcome is the same whether a special commercial speech inquiry or a stricter form of judicial scrutiny is applied ... there is no need to determine whether all speech hampered by [the law] is commercial").
A different standard applies to S.B. 7072's disclosure provisions— § 106.072(4) and § 501.2041(2)(a), (c), (e), (4). These are content-neutral regulations requiring social-media platforms to disclose "purely factual and uncontroversial information" about their conduct toward their users and the "terms under which [their] services will be available," which are assessed under the standard announced in Zauderer . 471 U.S. at 651, 105 S.Ct. 2265. While "restrictions on nonmisleading commercial speech regarding lawful activity must withstand intermediate scrutiny," when "the challenged provisions impose a disclosure requirement rather than an affirmative limitation on speech ... the less exacting scrutiny described in Zauderer governs our review." Milavetz, Gallop & Milavetz, P.A. v. United States , 559 U.S. 229, 249, 130 S.Ct. 1324, 176 L.Ed.2d 79 (2010). Although this standard is typically applied in the context of advertising and to the government's interest in preventing consumer deception, we think it is broad enough to cover S.B. 7072's disclosure requirements—which, as the State contends, provide users with helpful information that prevents them from being misled about platforms’ policies.
B
At last, it is time to apply the requisite First Amendment scrutiny. We hold that it is substantially likely that none of S.B. 7072's content-moderation restrictions survive intermediate—let alone strict—scrutiny. We further hold that there is a substantial likelihood that the "thorough explanation" disclosure requirement ( § 501.2041(2)(d) ) is unconstitutional. As for the remaining disclosure provisions, we hold that it is not substantially likely that they are unconstitutional.
We agree with the State that only those provisions of the Act that are substantially likely to be unconstitutional should be enjoined. The Act contains a severability clause that says that the invalidity of any provision "shall not affect other provisions or applications of the act which can be given effect without" it. S.B. 7072 § 6. Under Florida law, "[t]he severability of a statutory provision is determined by its relation to the overall legislative intent of the statute of which it is a part, and whether the statute, less the invalid provisions, can still accomplish this intent." Emerson v. Hillsborough County , 312 So. 3d 451, 460 (Fla. 2021). The plaintiff bears the burden to establish that the measure isn't severable. Id. Here, the severability clause reflects the Florida legislature's intent to give effect to every constitutionally permissible provision of the Act, and, with the exception of its argument that the entire Act is viewpoint-based, NetChoice hasn't argued that any of the provisions are inseverable.
1
We'll start with S.B. 7072's content-moderation restrictions. While some of these provisions are likely subject to strict scrutiny, it is substantially likely that none survive even intermediate scrutiny. When a law is subject to intermediate scrutiny, the government must show that it "is narrowly drawn to further a substantial governmental interest ... unrelated to the suppression of free speech." FLFNB II , 11 F.4th at 1291. Narrow tailoring in this context means that the regulation must be "no greater than is essential to the furtherance of [the government's] interest." O'Brien , 391 U.S. at 377, 88 S.Ct. 1673.
We think it substantially likely that S.B. 7072's content-moderation restrictions do not further any substantial governmental interest—much less any compelling one. Indeed, the State's briefing doesn't even argue that these provisions can survive heightened scrutiny. (The State seems to have wagered pretty much everything on the argument that S.B. 7072's provisions don't trigger First Amendment scrutiny at all.) Nor can we discern any substantial or compelling interest that would justify the Act's significant restrictions on platforms’ editorial judgment. We'll briefly explain and reject two possibilities that the State might offer.
The State might theoretically assert some interest in counteracting "unfair" private "censorship" that privileges some viewpoints over others on social-media platforms. See S.B. 7072 § 1(9). But a state "may not burden the speech of others in order to tilt public debate in a preferred direction," Sorrell , 564 U.S. at 578–79, 131 S.Ct. 2653, or "advance some points of view," Pacific Gas , 475 U.S. at 20, 106 S.Ct. 903 (plurality op.). Put simply, there's no legitimate—let alone substantial—governmental interest in leveling the expressive playing field. Nor is there a substantial governmental interest in enabling users—who, remember, have no vested right to a social-media account—to say whatever they want on privately owned platforms that would prefer to remove their posts: By preventing platforms from conducting content moderation—which, we've explained, is itself expressive First-Amendment-protected activity—S.B. 7072 "restrict[s] the speech of some elements of our society in order to enhance the relative voice of others"—a concept "wholly foreign to the First Amendment." Buckley v. Valeo , 424 U.S. 1, 48–49, 96 S.Ct. 612, 46 L.Ed.2d 659 (1976). At the end of the day, preventing "unfair[ness]" to certain users or points of view isn't a substantial governmental interest; rather, private actors have a First Amendment right to be "unfair"—which is to say, a right to have and express their own points of view. Miami Herald , 418 U.S. at 258, 94 S.Ct. 2831.
The State might also assert an interest in "promoting the widespread dissemination of information from a multiplicity of sources." Turner , 512 U.S. at 662, 114 S.Ct. 2445. Just as the Turner Court held that the must-carry provisions served the government's substantial interest in ensuring that American citizens were able to access their "local broadcasting outlets," id. at 663–64, 114 S.Ct. 2445, the State could argue that S.B. 7072 ensures that political candidates and journalistic enterprises are able to communicate with the public, see Fla. Stat. §§ 106.072(2) ; 501.2041(2)(f), (j). But it's hard to imagine how the State could have a "substantial" interest in forcing large platforms—and only large platforms—to carry these parties’ speech: Unlike the situation in Turner , where cable operators had "bottleneck, or gatekeeper control over most programming delivered into subscribers’ homes," 512 U.S. at 623, 114 S.Ct. 2445, political candidates and large journalistic enterprises have numerous ways to communicate with the public besides any particular social-media platform that might prefer not to disseminate their speech—e.g. , other more-permissive platforms, their own websites, email, TV, radio, etc. See Reno , 521 U.S. at 870, 117 S.Ct. 2329 (noting that unlike the broadcast spectrum, "the internet can hardly be considered a ‘scarce’ expressive commodity" and that "[t]hrough the use of Web pages, mail exploders, and newsgroups, [any] individual can become a pamphleteer"). Even if other channels aren't as effective as, say, Facebook, the State has no substantial (or even legitimate) interest in restricting platforms’ speech—the messages that platforms express when they remove content they find objectionable—to "enhance the relative voice" of certain candidates and journalistic enterprises. Buckley , 424 U.S. at 48–49, 96 S.Ct. 612.
There is also a substantial likelihood that the consistency, 30-day, and user-opt-out provisions ( § 501.2041(2)(b), (c), (f), (g) ) fail to advance substantial governmental interests. First, it is substantially unlikely that the State will be able to show an interest sufficient to justify requiring private actors to apply their content-moderation policies—to speak—"consistently." See § 501.2041(2)(b). Is there any interest that would justify a state forcing, for instance, a parade organizer to apply its criteria for participation in a manner that the state deems "consistent"? Could the state require the organizer to include a group that it would prefer to exclude on the ground that it allowed similar groups in the past, or vice versa? We think not. See Hurley , 515 U.S. at 573–74, 115 S.Ct. 2338. Because social-media platforms exercise analogous editorial judgment, the same answer applies to them. Second, there is likely no governmental interest sufficient to justify prohibiting a platform from changing its content-moderation policies—i.e. , prohibiting a private speaker from changing the messages it expresses—more than once every 30 days. See § 501.2041(2)(c). Finally, there is likely no governmental interest sufficient to justify forcing platforms to show content to users in a "sequential or chronological" order, see § 501.2041(2)(f), (g) —a requirement that would prevent platforms from expressing messages through post-prioritization and shadow banning.
Moreover, and in any event, even if the State could establish that its content-moderation restrictions serve a substantial governmental interest, it hasn't even attempted to—and we don't think it could—show that the burden that those provisions impose is "no greater than is essential to the furtherance of that interest." O'Brien , 391 U.S. at 377, 88 S.Ct. 1673. For instance, §§ 106.072(2) and 501.2041(2)(h) prohibit deplatforming, deprioritizing, or shadow-banning candidates regardless of how blatantly or regularly they violate a platform's community standards and regardless of what alternative avenues the candidate has for communicating with the public. These provisions would apply, for instance, even if a candidate repeatedly posted obscenity, hate speech, and terrorist propaganda. The journalistic-enterprises provision requires platforms to allow any entity with enough content and a sufficient number of users to post anything it wants—other than true "obscen[ity]"—and even prohibits platforms from adding disclaimers or warnings. See Fla. Stat. § 501.2041(2)(j). As one amicus vividly described the problem, the provision is so broad that it would prohibit a child-friendly platform like YouTube Kids from removing—or even adding an age gate to—soft-core pornography posted by PornHub, which qualifies as a "journalistic enterprise" because it posts more than 100 hours of video and has more than 100 million viewers per year. See Chamber of Progress Amicus Br. at 12. That seems to us the opposite of narrow tailoring.
Even worse, S.B. 7072 would seemingly prohibit Facebook or Twitter from removing a video of a mass shooter's killing spree if it happened to be reposted by an entity that qualifies for "journalistic enterprise" status.
We conclude that NetChoice has shown a substantial likelihood of success on the merits of its claim that S.B. 7072's content-moderation restrictions—in Fla. Stat. §§ 106.072(2), 501.2041(2)(b), (c), (f), (g), (h), (j) —violate the First Amendment.
2
We assess S.B. 7072's disclosure requirements—in §§ 106.072(4), 501.2041(2)(a), (c), (d), (e)—under the Zauderer standard: A commercial disclosure requirement must be "reasonably related to the State's interest in preventing deception of consumers" and must not be "[u]njustified or unduly burdensome" such that it would "chill[ ] protected speech." Milavetz , 559 U.S. at 250, 130 S.Ct. 1324 (citing Zauderer , 471 U.S. at 651, 105 S.Ct. 2265 ).
With one notable exception, it is not substantially likely that the disclosure provisions are unconstitutional. The State's interest here is in ensuring that users—consumers who engage in commercial transactions with platforms by providing them with their attention and data for advertising in exchange for access to a forum—are fully informed about the terms of that transaction and aren't misled about platforms’ content-moderation policies. This interest is likely legitimate. On the ensuing burden question, NetChoice hasn't established a substantial likelihood that the provisions that require platforms to publish their standards ( § 501.2041(2)(a) ), inform users about changes to their rules ( § 501.2041(2)(c) ), provide users with view counts for their posts ( § 501.2041(2)(e) ), and inform candidates about free advertising ( § 106.072(4) ) are unduly burdensome or likely to chill platforms’ speech. So, these provisions aren't substantially likely to be unconstitutional.
This interest likely applies to all of the disclosure provisions with the possible exception of the candidate-free-advertising provision (§ 106.072(4) ). Neither party has addressed that provision in any detail, but it might serve a legitimate purpose in ensuring that candidates who purchase advertising from platforms are fully informed about the "free advertising" that the platform has already provided so that they can make better ad-purchasing decisions. While there is some uncertainty about the interest this provision serves and the meaning of "free advertising," we conclude that at this stage of the proceedings, NetChoice hasn't shown that it is substantially likely to be unconstitutional.
Of course, NetChoice still might establish during the course of litigation that these provisions are unduly burdensome and therefore unconstitutional.
But NetChoice does argue that § 501.2041(2)(d) —the requirement that platforms provide notice and a detailed justification for every content-moderation action—is "practically impossible to satisfy." Br. of Appellees at 49. We conclude that it is substantially likely that this provision is unconstitutional under Zauderer because it is unduly burdensome and likely to chill platforms’ protected speech. The targeted platforms remove millions of posts per day; YouTube alone removed more than a billion comments in a single quarter of 2021. See Doc. 25-1 at 6. For every one of these actions, the law requires a platform to provide written notice delivered within seven days, including a "thorough rationale" for the decision and a "precise and thorough explanation of how [it] became aware" of the material. See § 501.2041(3). This requirement not only imposes potentially significant implementation costs but also exposes platforms to massive liability: The law provides for up to $100,000 in statutory damages per claim and pegs liability to vague terms like "thorough" and "precise." See § 501.2041(6)(a). Thus, a platform could be slapped with millions, or even billions, of dollars in statutory damages if a Florida court were to determine that it didn't provide sufficiently "thorough" explanations when removing posts. It is substantially likely that this massive potential liability is "unduly burdensome" and would "chill[ ] protected speech"—platforms’ exercise of editorial judgment—such that § 501.2041(2)(d) violates platforms’ First Amendment rights. Milavetz , 559 U.S. at 250, 130 S.Ct. 1324.
* * *
It is substantially likely that S.B. 7072's content-moderation restrictions ( §§ 106.072(2), 501.2041(2)(b), (c), (f), (g), (h), (j) ) and its requirement that platforms provide a thorough rationale for every content-moderation action ( § 501.2041(2)(d) ) violate the First Amendment. The same is not true of the Act's other disclosure provisions ( §§ 106.072(4), 501.2041(2)(a), (c), (e) ) and its user-data-access provision ( § 501.2041(2)(i) ).
Nor are these provisions substantially likely to be preempted by 47 U.S.C. § 230. Neither NetChoice nor the district court asserted that § 230 would preempt the disclosure, candidate-advertising, or user-data-access provisions. It is not substantially likely that any of these provisions treat social-media platforms "as the publisher or speaker of any information provided by" their users, 47 U.S.C. § 230(c)(1), or hold platforms "liable on account of" an "action voluntarily taken in good faith to restrict access to or availability of material that the provider considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable," id. § 230(c)(2)(A).
IV
Finally, we turn to the remaining preliminary-injunction factors. Our conclusions about which provisions of S.B. 7072 are substantially likely to violate the First Amendment effectively determine the result of this appeal because likelihood of success on the merits "is generally the most important of the four factors." Gonzalez , 978 F.3d at 1271 n.12 (quotation marks omitted). With respect to the second factor, we have held that "an ongoing violation of the First Amendment"—as the platforms here would suffer in the absence of an injunction—"constitutes an irreparable injury." FF Cosms. FL, Inc. v. City of Miami Beach , 866 F.3d 1290, 1298 (11th Cir. 2017) ; see also Otto v. City of Boca Raton , 981 F.3d 854, 870 (11th Cir. 2020). The third and fourth factors—"damage to the opposing party" and the "public interest"—"can be consolidated" because "[t]he nonmovant is the government." Otto , 981 F.3d at 870. And "neither the government nor the public has any legitimate interest in enforcing an unconstitutional ordinance." Id. Therefore, the preliminary-injunction factors weigh in favor of enjoining the likely unconstitutional provisions of the Act.
* * *
We hold that the district court did not abuse its discretion when it preliminarily enjoined those provisions of S.B. 7072 that are substantially likely to violate the First Amendment. But the district court did abuse its discretion when it enjoined provisions of S.B. 7072 that aren't likely unconstitutional. Accordingly, we AFFIRM the preliminary injunction in part, and VACATE and REMAND in part, as follows:
| Provision | Fla. Stat. § | Likely Constitutionality | Disposition |
|---|---|---|---|
| Candidate deplatforming | 106.072(2) | Unconstitutional | Affirm |
| Posts by/about candidates | 501.2041(2)(h) | Unconstitutional | Affirm |
| "Journalistic enterprises" | 501.2041(2)(j) | Unconstitutional | Affirm |
| Consistency | 501.2041(2)(b) | Unconstitutional | Affirm |
| 30-day restriction | 501.2041(2)(c) | Unconstitutional | Affirm |
| User opt-out | 501.2041(2)(f),(g) | Unconstitutional | Affirm |
| Explanations (per decision) | 501.2041(2)(d) | Unconstitutional | Affirm |
| Standards | 501.2041(2)(a) | Constitutional | Vacate |
| Rule changes | 501.2041(2)(c) | Constitutional | Vacate |
| User view counts | 501.2041(2)(e) | Constitutional | Vacate |
| Candidate "free advertising" | 106.072(4) | Constitutional | Vacate |
| User-data access | 501.2041(2)(i) | Constitutional | Vacate |