New California Content Moderation Rules Highlight Legislative Focus on Social Media Companies

Published: Mar. 04, 2024

Social media companies, well aware of the power of trends, are themselves trending in state legislatures across the U.S. as lawmakers seek to regulate content moderation practices. California recently joined this effort with AB 587, which went into effect on January 1, 2024. This law is less controversial and onerous than its predecessors in Texas and Florida because it does not impose viewpoint-based requirements, but it nonetheless creates new compliance burdens for regulated companies.

Broadly, AB 587 requires “social media companies” (as defined in the law) to disclose designated information about their content moderation practices in their terms of service and in semiannual reports to the California Attorney General (“CAG”). Companies offering online services or apps with a social component should evaluate whether they are subject to the law and, if so, how to update their terms and otherwise comply with reporting requirements.

What Companies Are Covered?

AB 587 applies to “social media companies,” which are companies that own or operate one or more “social media platforms.” 

A “social media platform” is “a public or semipublic internet-based service or application” with California users that: 

  • Serves the substantial function of connecting users so they can interact socially on the service/app; and
  • Allows a user to: 
    • create a public or semipublic profile to sign in and use the service/app; 
    • populate a list of other users with whom the user shares a social connection on the service/app; and
    • create or post content (i.e., statements or comments) that other users can view (e.g., via chat rooms, message boards, or content feeds). 

Exempted from the law, however, are services or apps that permit users to interact only through direct messaging functionalities.

AB 587 applies only to social media companies that generated $100 million or more in gross revenue in the preceding year. The law does not specify that this revenue must have been generated by the social media platform(s) owned or operated by the company, so on its face, AB 587 applies to a social media company meeting the revenue threshold even if only a small portion of its revenue derives from its social media platform(s). 

In addition, AB 587 does not require a social media platform to be the main service offered by a social media company. A company can own or operate a social media platform, and thus meet the definition of a social media company, even if that platform is a small part of the company’s services. However, the transparency requirements apply only to the social media platform(s) the company owns or operates, not to the company as a whole. 

Transparency Requirements

Under AB 587, social media companies must, in their terms of service, describe how users can flag content, groups, or other users that they believe violate the company’s terms, and state the company’s timing for responding to and resolving these flags (though there is no legally prescribed time to respond). The terms must also describe the actions the company may take in response to content or users that violate the terms, such as removal, banning, demonetization (e.g., removing or limiting revenue-generating ad content in a post), or deprioritization (e.g., causing a post to appear less prominently in users’ content feeds). Social media companies must post their terms “in a manner reasonably designed” to inform users of the terms’ “existence and contents” (i.e., not in a hidden or inconspicuous location).

In addition to an initial report (which was due January 1, 2024), social media companies must submit “terms of service reports” to the CAG twice per year: by April 1 (covering the third and fourth quarters of the preceding year) and by October 1 (covering the first two quarters of the current year).

The CAG has set up an online reporting form in which the company must describe its content moderation practices. For a full list of required information, companies should consult AB 587. To highlight some key requirements, social media companies must:

  • State whether their terms define hate speech or racism, extremism or radicalization, disinformation or misinformation, harassment, or foreign political interference, and provide the corresponding definitions (collectively, “prohibited” content).
  • Report the number of instances in which content flagged as potentially prohibited resulted in company action and, interestingly, the number of times users viewed or shared flagged content before the company took such action. The reported information must be broken down by the category of item flagged (e.g., post, comment, message, or user), the type of media (e.g., text, image, or video), the category of prohibited content (e.g., hate speech, extremism, or disinformation), how the content was flagged (e.g., by a user or an automated system), and the action taken.
  • Provide a detailed description of the company’s content moderation practices, including but not limited to policies addressing prohibited content, how automated content moderation systems operate on the platform, how the company responds to reports of violations of its terms, and what actions it will take in response to content, users, or groups responsible for such violations.

Companies subject to the law should think carefully about their responses because the CAG publishes the reports online, where they are available to the public. Moreover, to the extent companies do not currently track the metrics that must be reported, they should consider how to capture the required information going forward. 

The CAG has exclusive enforcement authority under AB 587 and can bring enforcement actions seeking civil penalties of up to $15,000 per violation per day for noncompliance. 

AB 587 in Context

California AB 587 is not the first content moderation law of its kind, and it likely will not be the last. Absent federal mandates, content moderation is an area states may regulate as they see fit, subject only to constitutional limits. And these limits seem unlikely to offer companies relief from transparency requirements like those in AB 587. 

For background, earlier content moderation laws passed by Texas and Florida in 2021 not only imposed transparency requirements like those in AB 587, but also limited social media companies’ ability to remove, edit, or arrange (e.g., prioritize) user content based on the viewpoints expressed. 

While the transparency requirements have survived judicial scrutiny, courts are divided over the constitutionality of the viewpoint-based restrictions. The Fifth Circuit upheld these provisions in Texas’s law, but the Eleventh Circuit struck them down in Florida’s law, holding that restrictions on social media platforms’ ability to curate, arrange, and moderate content violated the companies’ First Amendment rights to exercise editorial judgment. The Supreme Court is reviewing the appellate courts’ holdings about the viewpoint-based requirements, but not their decisions to uphold the transparency requirements, which seem to have passed constitutional muster. 

Because AB 587 only includes transparency requirements, it seems similarly likely to withstand legal challenges on constitutional grounds. The law has already survived X Corp’s (formerly Twitter’s) effort to preliminarily enjoin it, and though it may face further challenges, First Amendment arguments seem unlikely to succeed.

Putting AB 587 in even broader context, transparency requirements are just one feature of an increasingly complex compliance landscape for social media companies. State legislatures have recently proposed and passed laws imposing a slew of other requirements, including obligations related to age verification, parental consent, teens’ data, and child sexual abuse material. Companies may be well served by focusing their compliance efforts on laws like AB 587, which have survived legal challenge and gone into effect, while keeping an eye on laws with less certain futures.