Highlights
- California’s SB 53 introduces landmark transparency and safety requirements for “frontier” AI models, imposing new obligations on developers related to public disclosures, risk assessments, and incident reporting.
- Large frontier developers must publish and annually update a comprehensive “frontier AI framework,” report catastrophic risk assessments to the state, and implement whistleblower protections.
- SB 53 is part of a broader trend in AI regulation at the state level, with some states considering parallel bills targeting frontier models and others focused on other high-risk AI applications—such as companion chatbots, mental health treatment, and automated decision-making.
California recently enacted SB 53, the Transparency in Frontier Artificial Intelligence Act (“TFAIA”), one of the first pieces of state-level legislation aimed squarely at safety for “frontier” AI models. These are foundation models trained using a quantity of computing power greater than 10^26 integer or floating-point operations (“FLOPs”), and their regulation has been a point of intense legislative debate in California.
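For a rough sense of scale, the sketch below uses the common approximation that training a dense transformer requires roughly 6 × parameters × training tokens FLOPs to check whether a hypothetical training run would cross the statute’s 10^26-FLOP threshold. The model size, token count, and helper names here are illustrative assumptions, not figures drawn from SB 53.

```python
# Illustrative only: SB 53 defines a "frontier model" by total training compute
# (> 1e26 integer or floating-point operations). The ~6 * N * D rule of thumb
# for dense transformer training compute is an assumption, not part of the statute.

SB53_THRESHOLD_FLOPS = 1e26

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate for a dense transformer: ~6 * N * D."""
    return 6.0 * parameters * training_tokens

# Hypothetical run: a 1-trillion-parameter model trained on 20 trillion tokens.
flops = estimated_training_flops(parameters=1e12, training_tokens=20e12)
print(f"Estimated training compute: {flops:.1e} FLOPs")
print("Above SB 53 threshold" if flops > SB53_THRESHOLD_FLOPS else "Below SB 53 threshold")
```

Any real determination would, of course, need to account for how the statute counts compute across the full training process rather than rely on a back-of-the-envelope estimate.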
SB 53 draws extensively from the recommendations found in the California Report on Frontier AI Policy, a working-group expert report commissioned by Governor Gavin Newsom in 2024 after his veto of SB 53’s more aggressive predecessor, SB 1047. SB 53 notably does not include many of SB 1047’s stricter requirements, including pre-training safety procedures, annual third-party audits, a 72-hour reporting window for safety incidents, full shutdown capabilities for covered models, and sizeable penalties calculated as a percentage of computing cost.
Nevertheless, SB 53, which takes effect on January 1, 2026, imposes new transparency, safety, and whistleblower obligations on frontier developers and establishes new reporting requirements for risk assessments and critical safety incidents. Some of the bill’s provisions apply to all frontier developers, while others specifically target “large frontier developers” (those with annual gross revenues over $500 million).
Transparency Reports
- When releasing a new frontier model, or a substantially modified version of an existing one, all frontier developers must publish a transparency report that includes the developer’s website, a mechanism for communicating with the developer, the release date, the languages supported, the modalities of output, the intended uses of the model, and any generally applicable restrictions or conditions.
- Large frontier developers must make additional disclosures, including summaries of catastrophic risk assessments, the results of those assessments, and whether third-party evaluators were involved. Publication of a system or model card can fulfill these obligations.
- With respect to transparency documents, developers may make necessary redactions to protect trade secrets, cybersecurity, public safety, or national security.
AI Frameworks
- Large frontier developers must publish a “frontier AI framework” on their websites, detailing how they assess models for catastrophic risk, mitigate potential risks, implement cybersecurity practices, and institute internal governance controls. This framework must be reviewed and, if necessary, updated annually.
Reporting & Accuracy Requirements
- Large frontier developers must report catastrophic risk assessments from internal model use to the California Office of Emergency Services (“OES”) and are prohibited from making materially false or misleading statements about those risks, their management, or compliance with their frontier AI framework. Frontier developers more broadly are also prohibited from making materially false or misleading statements about catastrophic risk from their frontier models or their management of that risk, but they are not subject to the same reporting requirement.
Critical Safety Incident Reporting
- Using a reporting mechanism to be established by OES, every frontier developer must report any critical safety incident involving one or more of its frontier models within 15 days (and within 24 hours in some cases). Critical safety incidents are defined as model behavior that results in, or materially risks, death, serious injury, or loss of control over the system.
Whistleblower Protections
- All frontier developers are prohibited from adopting rules or policies that prevent a covered employee from disclosing information, whether openly or anonymously, if the employee has reasonable cause to believe that (i) the developer’s activities pose a danger to public health or safety or (ii) the frontier developer has violated the TFAIA.
Penalties and Reporting
- Large frontier developers that fail to comply with SB 53’s requirements may be subject to a civil penalty of up to $1 million per violation. Beginning January 1, 2027, OES must produce an annual report with anonymized and aggregated information about critical safety incidents. Also starting January 1, 2027, the Attorney General must produce an annual report with anonymized and aggregated information about reports from covered employees.
SB 53 also creates the CalCompute Consortium to design a state-backed public cloud computing cluster intended to advance the development and deployment of AI that is safe, ethical, equitable, and sustainable. The consortium must deliver a report to the California Legislature by January 1, 2027, with a proposed design and funding framework for the cluster.
The law also recognizes the fast-changing AI landscape. It requires annual review of the definitions of “frontier model,” “frontier developer,” and “large frontier developer” to ensure they remain aligned with federal and international usage, acknowledges potential future risks from smaller systems, and recommends that future safety legislation not be scoped by computing power alone.
Key Takeaways for Frontier Developers
For organizations developing, deploying, or integrating frontier AI models, SB 53 may require an overhaul of existing transparency and risk management policies. Existing published safety protocols or model cards may need to be updated to meet the level of specificity mandated by SB 53, and large frontier developers will need to consolidate their policies into a unified frontier AI framework and ensure rigorous compliance with its requirements, as misleading statements or disclosures may now result in civil liability. Specifically:
- Frontier developers must establish internal anonymous reporting channels for covered employees to flag any catastrophic risk concerns and ensure that covered employees are notified of their rights in accordance with the bill’s requirements.
- Developers must create mechanisms to monitor for and report critical safety incidents.
- Trade-secret redactions in transparency disclosures are allowed, but companies must be careful to avoid excessive redaction that may undercut their disclosures’ efficacy. Legal teams will need to assess which portions of internal protocols or test results can be safely withheld.
SB 53 in Context
SB 53 arrives amid a flurry of AI regulatory and legislative activity at the state level. Over the past few months, state legislatures have focused on regulating the development and use of AI in the areas of greatest perceived risk, whether frontier models, companion chatbots with mental health implications, or models that contribute to consequential decisions about consumers. Several of these bills are, like SB 53, focused on frontier or foundation models:
New York
In July, the New York legislature passed the Responsible AI Safety and Education (RAISE) Act (S6953B), which also applies to frontier models and mandates that developers implement safety and security protocols to evaluate and mitigate “critical harm.” Covered developers must publish a redacted version of their protocols, provide unredacted versions to state officials upon request, and disclose safety incidents to the Attorney General within 72 hours. The fines available to the New York Attorney General dwarf those in SB 53: up to $10 million for a first violation and up to $30 million for repeat violations. The bill currently awaits the governor’s signature, though questions remain as to its chances of being enacted.
Michigan
Michigan lawmakers recently introduced the AI Safety and Security Transparency Act (HB 4668), which would establish a robust AI safety and transparency regime for developers of high-cost foundation models, emphasizing catastrophic risk prevention through public protocols, audits, and regular reporting. Compared with SB 53, HB 4668 would impose broader public transparency obligations, independent audits, and whistleblower protections, taking a more comprehensive and externally accountable approach to AI governance. The bill is currently under committee consideration.
Colorado
Instead of focusing on foundation models, the Colorado AI Act (CAIA) (SB 24-205) regulates artificial intelligence systems used in specified high-risk contexts that substantially contribute to consequential decisions involving consumers (e.g., education, employment, lending, health care). The CAIA requires that developers of these high-risk systems use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. Like SB 53, the CAIA requires developers to make certain disclosures about their models to users and state officials.
Other recently passed AI laws address foundation models in specific application contexts, including:
- California SB 243 (awaiting the governor’s signature), which regulates companion chatbots, particularly with respect to minors;
- New York’s AI Companion Law, passed in May 2025, which focuses on interactive AI systems and addresses disclosures, self-harm detection, and addictive design;
- the Illinois Wellness and Oversight for Psychological Resources Act, which bans autonomous AI therapy; and
- the Utah AI Policy Act, which requires disclosure when generative AI is used in communications with consumers.
As state legislatures continue to move aggressively into AI governance, developers need to prepare for a regulatory landscape defined by formal transparency protocols, regular internal or external audits, and stringent reporting requirements. Even if an organization is not directly covered today, these laws are likely to shape contractual norms, investor diligence, and customer expectations.