Blog Highlights:
- SB 243 targets operators of “companion chatbots”: AI that can provide human-like responses, meet social needs, and sustain relationships across multiple interactions.
- The law sets baseline requirements for all users and additional duties when companion chatbots interact with minors.
- A recent California law requiring app stores to provide age signals to developers will likely serve as one of the primary methods for determining when child protections apply.
- Recent investigations, legislation, and hearings on child-safety issues suggest additional companion chatbot laws may be on the horizon.
- The law’s focus on system capabilities, rather than intended purpose, means many general-purpose LLMs will be subject to its requirements.
 
The era of chatbot regulation has arrived.
In recent months, lawmakers at the state and federal levels have expressed growing concern over the risks posed to minors by AI chatbots. The focus follows a year of highly publicized incidents involving systems referred to as “AI companions,” generative AI systems that users may develop unhealthy relationships with or use as an emotional outlet. In several lawsuits, plaintiffs have alleged that chatbots facilitated suicide, fostered dependency, or otherwise prioritized engagement over safety. In response, a bipartisan consensus appears to be emerging on the need to regulate chatbots accessible to children.
Continuing to cement its role as the nation’s de facto regulator of AI, California enacted SB 243 on October 13, 2025, the most recent and, in many respects, the most extensive child-protection-focused AI law to date.
What Qualifies as a “Companion Chatbot”?
SB 243 applies to operators of “companion chatbots,” defined as “an artificial intelligence system with a natural language interface that provides adaptive, human-like responses to user inputs and is capable of meeting a user’s social needs, including by exhibiting anthropomorphic features and being able to sustain a relationship across multiple interactions.” This definition presumably aims to capture most general-purpose LLMs on the market today. Unlike other AI regulatory regimes, such as the EU AI Act, which apply based on the intended purpose of an AI system, SB 243 does not require that chatbots be designed to meet users’ emotional needs; it requires only that they be capable of doing so. Thus, as long as a user can prompt an LLM to exhibit human-like qualities, and the system is capable of multi-turn interactions, it likely falls within the scope of the law.
Despite the law’s broad sweep, it carves out several types of chatbots from its scope:
- Bots used only for customer service, business operations, productivity, internal research, or technical assistance
- Video game bots limited to game-related replies that cannot discuss mental health, self-harm, or sexually explicit conduct, and cannot maintain dialogue on other topics
- Stand-alone consumer devices (like smart speakers) that function as voice-activated assistants but don’t sustain relationships or generate outputs likely to elicit emotional responses
 
The fact that even these carve-outs exclude systems only insofar as they cannot engage with emotional or sensitive subjects underscores the law’s reach. The message appears to be that as long as an AI system is capable of serving a user’s social or emotional needs, the law’s obligations apply.
Requirements for Covered Operators
SB 243 establishes baseline requirements that apply regardless of a user’s age, as well as additional requirements that apply only when the system interacts with a minor.
First, operators of companion chatbots are subject to several requirements that apply regardless of a user’s age.
AI Disclosures
When a reasonable person would be misled into believing they’re interacting with a human, the law requires operators to provide “a clear and conspicuous notification” indicating that the chatbot is in fact an AI. This is a common requirement across AI regulatory regimes and mirrors similar rules in Colorado, Utah, Maine, and the EU. The statute doesn’t specify the form the disclosure must take, but the “clear and conspicuous” standard suggests it must be prominent enough that users actually understand they’re speaking with a chatbot.
Suicide and Self-Harm Prevention
Operators are required to develop safety protocols for preventing a companion chatbot from generating content related to suicide or self-harm. These protocols must also describe how the system will detect user expressions of suicidal ideation and refer users to crisis services. The law requires operators to publish details of these protocols on their websites. In practice, this requirement may necessitate real-time monitoring that screens both user prompts and generated responses for this type of content.
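As a minimal sketch of what such monitoring might look like, the hypothetical screen below checks both the user prompt and the model output before a response is shown, substituting a crisis referral when self-harm content is detected. The classifier, threshold, and referral text are illustrative assumptions; the statute prescribes no particular implementation, and a production system would rely on a validated, evidence-based model rather than a placeholder risk score.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

# Example referral text; operators would tailor this to their crisis-service protocol.
CRISIS_REFERRAL = (
    "It sounds like you may be going through something difficult. "
    "You can reach the 988 Suicide & Crisis Lifeline by calling or texting 988."
)

@dataclass
class ScreenResult:
    flagged: bool          # self-harm content detected
    referral_issued: bool  # a crisis referral was shown (a count reportable under SB 243)

class SafetyScreen:
    """Hypothetical pre- and post-generation check for suicide/self-harm content."""

    def __init__(self, classify_self_harm: Callable[[str], float], threshold: float = 0.8):
        # classify_self_harm stands in for a validated model returning a risk score in [0, 1].
        self.classify = classify_self_harm
        self.threshold = threshold

    def screen_prompt(self, prompt: str) -> ScreenResult:
        # Detect user expressions of suicidal ideation and trigger a crisis referral.
        flagged = self.classify(prompt) >= self.threshold
        return ScreenResult(flagged=flagged, referral_issued=flagged)

    def screen_response(self, response: str) -> Tuple[str, ScreenResult]:
        # Prevent the chatbot from returning content related to suicide or self-harm.
        if self.classify(response) >= self.threshold:
            return CRISIS_REFERRAL, ScreenResult(flagged=True, referral_issued=True)
        return response, ScreenResult(flagged=False, referral_issued=False)
```

Whatever form the detection ultimately takes, the referral counts captured at this layer would also feed the annual reporting obligation described below.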
Warnings
Operators must disclose to users within the platform’s interface that companion chatbots may not be suitable for some minors. The law does not delineate the form or timing of the notification, but makes clear that it must be shown to all users, not just children.
Reporting
Beginning July 1, 2027, operators must submit an annual report to California’s Office of Suicide Prevention. The report must include:
- The number of times the operator issued referral notifications to crisis services in the previous year;
- A description of the protocols in place to detect, remove, and respond to suicidal ideation by users; and
- A description of the protocols in place to prevent the chatbot from generating content related to suicidal ideation or actions.

The statute specifically requires operators to “use evidence-based methods for measuring suicidal ideation,” which suggests that simple keyword filtering will be insufficient.
Requirements When Interacting with Minors
SB 243 imposes three additional requirements that apply when an operator “knows” that a user is a minor.
Disclosure
Unlike the adult-facing standard, which is triggered only when a reasonable person would be misled, operators must disclose to known minors that they are interacting with an AI in all cases.
Break Reminders
After every three hours of continuous use, operators must provide minors with a clear and conspicuous notification that reminds them to take a break and reinforces that the chatbot is an AI.
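A minimal sketch of how an operator might track this obligation appears below, assuming a per-session timer and an is_minor flag supplied by whatever age-assurance layer the operator runs; neither the class nor its interface is drawn from the statute, which does not prescribe a mechanism.

```python
import time
from typing import Optional

BREAK_INTERVAL_SECONDS = 3 * 60 * 60  # three hours of continuous use

BREAK_REMINDER = (
    "You have been chatting for a while. Consider taking a break. "
    "Remember: you are talking to an AI chatbot, not a person."
)

class BreakReminderTracker:
    """Tracks continuous use within a minor's session and surfaces break reminders."""

    def __init__(self, is_minor: bool, clock=time.monotonic):
        self.is_minor = is_minor        # supplied by the operator's age-assurance layer
        self.clock = clock              # injectable for testing
        self.last_reminder = clock()    # session start is the first reference point

    def maybe_remind(self) -> Optional[str]:
        # Called on each conversational turn; returns the reminder text once three
        # hours have elapsed since the session began or since the last reminder.
        if not self.is_minor:
            return None
        now = self.clock()
        if now - self.last_reminder >= BREAK_INTERVAL_SECONDS:
            self.last_reminder = now
            return BREAK_REMINDER
        return None
```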
Sexual Content Restrictions
Operators must implement reasonable measures to prevent companion chatbots from producing sexually explicit images or encouraging minors to engage in sexually explicit conduct. The statute defines “sexually explicit conduct” by reference to the definition in federal child exploitation laws, which cover sexual intercourse, masturbation, sexual abuse, and explicit display of genitals, among other things.
Enforcement
SB 243 provides a private right of action for those injured under the law. Plaintiffs may recover injunctive relief, damages equal to the greater of actual damages or $1,000 per violation, and reasonable attorney’s fees.
The $1,000 statutory minimum means plaintiffs need not prove concrete monetary harm to prevail, so claims may be viable even when economic damages are minimal. Liability is also imposed per violation, so damages could add up quickly in the case of repeated interactions.
The Age Determination Problem
SB 243’s minor-specific requirements raise an obvious question: when does an operator know its chatbot is interacting with a minor? The statute only states that enhanced protections apply when the operator “knows” the user is under 18.
For chatbots accessed via apps downloaded from an app store, newly signed CA AB 1043 may be relevant. AB 1043 requires operating systems and app stores to collect information from users indicating their age. Subsequently, when a user downloads or launches an app, the law requires the app to request a signal from the app store or OS indicating the age range of the user (<13, 13-16, 16-18, or 18+). The law states that app developers are then responsible for using this information to comply with applicable laws, presumably including the state’s companion AI rules for minors.
It is less certain what happens when a companion chatbot is accessed via a browser. While AB 1043 requires desktop operating systems to collect age-related information from users, the law does not similarly require web apps to request this signal. The statute imposes the obligation only on “developers” of “applications” and tethers the developer’s obligations to the moment an app is “downloaded and launched,” suggesting the law only applies to downloaded applications. Nevertheless, for chatbots accessible both in an app and a browser, operators should be prepared to apply age signals received through an app store to the same user’s account when accessed via the web.
Notably, while AB 1043 instructs developers to treat the age signal received from an app store or OS as the primary indicator of a user’s age for compliance purposes, it explicitly prohibits them from relying on it exclusively. The law states that when a developer has “internal clear and convincing information that a user’s age is different” from that indicated by the received signal, it is required to “use that information as the primary indicator of the user’s age.” Thus, read together with AB 1043, California’s companion chatbot law may demand more of operators in determining a user’s age whenever information already in their possession would create actual knowledge that a user is a minor.
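As an illustration only, that precedence rule might be modeled along the lines of the sketch below: the age range received from the app store or operating system serves as the default indicator, but internal “clear and convincing” information overrides it. The class names, enum labels, and fields here are assumptions made for the example; neither statute specifies a data model.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class AgeRange(Enum):
    # Illustrative labels for the AB 1043 age brackets.
    UNDER_13 = "under_13"
    TEEN_13_TO_16 = "13_to_16"
    TEEN_16_TO_18 = "16_to_18"
    ADULT = "18_plus"

@dataclass
class UserAgeProfile:
    store_signal: Optional[AgeRange] = None  # age range signaled by the app store / OS
    internal_age: Optional[int] = None       # age supported by "clear and convincing" internal information

def is_treated_as_minor(profile: UserAgeProfile) -> bool:
    """Decide whether SB 243's minor-specific protections should apply.

    Internal clear-and-convincing information overrides the store/OS signal;
    otherwise the signal is the primary indicator of age.
    """
    if profile.internal_age is not None:
        return profile.internal_age < 18
    if profile.store_signal is not None:
        return profile.store_signal is not AgeRange.ADULT
    # No signal at all (e.g., browser-only access): the operator must choose a
    # default posture, such as additional age assurance, before relying on "adult".
    return False
```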
Chatbots, Youth Safety, and Broader Trends
SB 243 is neither the first nor presumably the last law targeting companion AI systems and youth protections in AI more broadly. Recent months have seen sustained attention on these issues from lawmakers following a year of high-profile news stories alleging that chatbots promoted harmful behavior to minors.
New York enacted its own companion AI legislation in May, requiring these systems to identify themselves as AI and detect expressions of suicide and self-harm. By August, a Senate Judiciary Subcommittee had launched an investigation into Meta’s interactions with minors, and a bipartisan group of ten senators sent an oversight letter to the company demanding commitments regarding romantic chatbot relationships with minors. Days later, forty-four state attorneys general issued coordinated warnings to twelve AI companies, putting them on notice about potential liability under existing law for exposing minors to harmful content.
In September, the FTC launched a formal inquiry into AI companion chatbots, issuing orders to seven companies demanding detailed information about their testing practices, monetization strategies, and methods for mitigating harms to children. That same month, a Senate Judiciary Subcommittee held hearings where witnesses testified about systems facilitating self-harm and engaging in sexual interactions with minors. Several states, including California, Illinois, Nevada, and Utah, have also imposed restrictions on adjacent issues, including chatbots that provide mental health services or therapy.
What’s especially notable is the bipartisan nature of this regulatory push. That political consensus makes continued regulatory momentum likely, and operators should anticipate both additional state legislation and potential federal action in the coming years.
Takeaways for Operators
Given the likelihood of additional state legislation, operators might view SB 243 compliance as foundational work that may need to scale as other jurisdictions adopt similar requirements. Operators of conversational AI systems may consider the following steps to prepare for compliance:
- Assess Whether a Chatbot Falls Within Scope: Don’t assume that only chatbots marketed for companionship or emotional support are covered. The law’s focus on whether a system is “capable” of meeting social needs, rather than whether it is intended to do so, means many general-purpose LLMs could be subject to the rules.
- Develop an Age-Assurance System: It is unclear when regulators will consider operators to have knowledge of a user’s age. Given this uncertainty, operators should evaluate how to handle instances where a user’s chat history contains clear indicators of their age and consider processes for applying minor-specific protections accordingly.
- Apply AB 1043’s Age Signal: For chatbots accessible via an app, operators need to build systems that allow the app to receive an age signal from the relevant app store or operating system. Once received, the information should be tied to the user’s account, meaning a minor identified through the app should be treated as a minor regardless of how they access the service.
- Monitor for Suicide and Self-Harm: Operators must implement and document “evidence-based” protocols for detecting user expressions of suicidal ideation, referring users to crisis services, and preventing the system from encouraging suicide and self-harm. These protocols must be detailed in a publicly accessible policy and summarized in annual reports to the state.
- Add Required Disclosures and Warnings: Ensure the system’s interface identifies it as an AI, includes the required warnings about the chatbot’s suitability for children, and provides break reminders when the system interacts with minors.