On June 17, 2020, in a 28-page report released on the topic of online platform liability, the U.S. Department of Justice proposed four material modifications of Section 230 of the CDA:
- Narrowing Section 230’s applicability where the platform is viewed as a bad actor
- Removing Section 230 protection against the government’s civil enforcement actions
- Preventing Section 230 from applying in antitrust cases
- Limiting the ability of platforms to moderate content except where the content is “obscene, lewd, lascivious, filthy, excessively violent, harassing.”
Simultaneously, Senator Josh Hawley (R-MO) introduced the “Limiting Section 230 Immunity to Good Samaritans Act,”1 which seeks to significantly curtail Section 230 safeguards for “edge providers” – large tech companies that (a) exceed $1.5 billion in global revenue and (b) have either 30 million US monthly users or more than 300 million monthly global users.
The DOJ report follows a 10-month period of analysis, working group meetings and open workshops to identify problems related to Section 230 and offer solutions. The solutions proposed in Sections 1, 2 and 4 of the report are fairly controversial. In Section 1, the DOJ report targets platforms viewed as “Bad Samaritans,” stripping platforms of Section 230 protections in cases involving child abuse, terrorism and cyber-stalking, and also removing immunity for platforms with actual knowledge “or notice” that the content may violate federal criminal law. In Section 2, the DOJ’s recommendations would grant the government free rein to bring future civil enforcement actions against platforms, to which Section 230 would not apply at all.
In Section 4, the DOJ proposes modifying subsection (c)(2)(A) of Section 230, which currently allows platforms to maintain immunity while removing content for a number of purposes, including content that is “otherwise objectionable.”2 The report proposes replacing “otherwise objectionable” with “unlawful” and “promotes terrorism.” Thus, removing content for any reason outside the remaining enumerated categories – such as obscenity, excessive violence, harassment, unlawful content or content promoting terrorism – could create liability for the platform, as the removal would no longer be immunized under Section 230. In simple terms, the change would eliminate the ability to safely remove propaganda, disinformation, white supremacy rhetoric and other hate speech, or to stop election meddling, among other things. The report also recommends removing liability protection if the platform did not act under a new definition of “good faith.” Under the DOJ’s approach, good faith would be defined as: (1) explaining content moderation practices in the platform’s terms of service; (2) acting consistently with those terms and other official representations about content moderation practices; (3) limiting content removal to the specific criteria set forth in subsection (c)(2)(A); and (4) notifying the original poster/provider of the basis for the moderation activity except in cases of criminal activity or imminent harm.
Under the Hawley bill, for edge providers to maintain Section 230 protection, they would be required to describe how they moderate content and promise to operate their services in good faith. If an edge provider is ever found not to have acted in good faith in a lawsuit brought by a user, it would lose immunity and be liable for the greater of actual damages or $5,000, as well as attorney’s fees, in any suit to enforce those promises.
Acts would be defined as “in good faith” if the provider “acts with an honest belief and purpose, observes fair dealing standards, and acts without fraudulent intent.”
Acts are not in good faith if the provider engages in:
- Selective enforcement. A provider may not intentionally engage in selective enforcement of its terms of service, including selective moderation of content.
- Selective enforcement by algorithm. A provider may not knowingly or recklessly disregard the fact that its algorithms selectively enforce the terms of service, including in content moderation.
- Breached promises. Any intentional failure to honor the provider’s public or private promises. This provision is especially noteworthy, as any potential breach of the terms of service in itself could open the provider up to civil liability.
- Any other act without an honest belief and purpose, without observing fair dealing standards, or with fraudulent intent.
The Hawley bill sets forth a vague and subjective standard. It would effectively strip Section 230 protection from large tech companies, because immunity would be at risk in every lawsuit and dependent on judicial interpretation of extremely vague wording. It would most likely drive the tech sector back to the so-called “moderator’s dilemma,” i.e., choosing between the extremes of strict censorship and a hands-off approach to all content.
The DOJ proposals are more moderate than the Hawley bill, but they apply to all providers and substantially carve back the immunity currently offered by the statute. The Hawley bill, by contrast, would essentially strip immunity from the largest platforms while leaving smaller platforms untouched. Both accomplish the same ends – increasing platforms’ liability for the conduct of their users and removing the ability of certain platforms to use their discretion as to what content to moderate on their services.
Quick Impact Analysis
Although it is unlikely that Congressional Democrats would support Senator Hawley’s bill given the backlash against President Trump’s May 28, 2020 Executive Order, bipartisan calls for Section 230 reform and bills to amend and curtail the law’s protections have cropped up continually over the past year. Before that, the last major amendment to Section 230, FOSTA/SESTA, passed easily with strong support from both parties. As the Trump Administration and Congress continue to take a hard look at Section 230, platform providers should be on alert for further legislation based on the DOJ’s legislative proposals and be prepared to engage promptly in lobbying efforts.
1 The bill is co-sponsored by Sens. Rubio (R-FL), Braun (R-IN) and Cotton (R-AR).
2 More specifically, Section 230(c)(2) grants immunity for platforms restricting access to material that is “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.”