On 29 September 2025, California Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act, known as SB53, into law, making California the first US state to impose specific transparency obligations on AI companies.

The Act provides that:

  • Companies must publish documentation about their safety practices, including how their AI models have been tested for dangerous capabilities.
  • Severe AI-related incidents must be reported to relevant authorities and then disclosed to the public.
  • Stronger legal protections will be afforded to whistleblowers who raise AI safety concerns.
  • The state Attorney General can impose civil penalties for noncompliance, with fines of up to US$10 million for repeated violations.

The Act applies to large AI developers with gross annual revenues over US$500 million that have trained foundation models using more than 10^26 floating point operations (FLOPs) of computing power. These thresholds currently capture only three models, developed by AI giants like OpenAI and xAI; smaller developers and less established start-ups are left unencumbered. However, to keep pace with technology shifts, the California Attorney General will be able to adjust the definition of 'large developer' through regulation.
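To make the combined test concrete (both thresholds must be met), the short Python sketch below encodes the applicability criteria as described above. It is purely illustrative: the function name and inputs are hypothetical and are not drawn from the statute or any official guidance.

  # Illustrative sketch only: encodes the two SB53 applicability thresholds
  # summarised above (gross annual revenue over US$500 million and training
  # compute above 10^26 FLOPs). Names and inputs are hypothetical.

  REVENUE_THRESHOLD_USD = 500_000_000    # gross annual revenue, US$
  COMPUTE_THRESHOLD_FLOPS = 1e26         # training compute, floating point operations

  def is_large_frontier_developer(annual_revenue_usd: float,
                                  training_compute_flops: float) -> bool:
      """Return True only if a developer meets both thresholds described above."""
      return (annual_revenue_usd > REVENUE_THRESHOLD_USD
              and training_compute_flops > COMPUTE_THRESHOLD_FLOPS)

  # Example: US$2 billion revenue and a 3 x 10^26 FLOP training run -> covered
  print(is_large_frontier_developer(2e9, 3e26))  # True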

[Figure: Training compute of frontier models, 2020–2025]

The scope of the Act is further narrowed by focusing on dangerous capabilities, defined as an AI’s ability to:

  • assist in creating a chemical, biological, radiological or nuclear weapon;
  • conduct a cyber attack;
  • take actions that would constitute a crime if they were committed by a human; or
  • evade the control of its developer or user.

Developers are required to manage, assess and mitigate ‘catastrophic risks’ of their AI models killing or seriously injuring more than 50 people or causing more than US$1 billion in property damage or loss.

How it came about

In 2024, California passed an earlier bill, SB1047, which sought to impose liability on companies whose AI models caused serious harm. Technology companies vehemently opposed SB1047. Governor Newsom vetoed it, saying that the bill overreached and applied stringent standards even to basic AI functions.

Nevertheless, Newsom agreed that guardrails for AI were needed and commissioned an expert report on frontier AI regulation. The report’s recommendations for targeted transparency and whistleblower protections clearly informed the subsequent drafting of SB53.

Democratic state Senator Scott Wiener, who authored both bills, describes SB1047 as a liability law and SB53 as a transparency law that establishes “commonsense guardrails to understand and reduce risk”.

“With this law, California is stepping up, once again, as a global leader on both technology innovation and safety.”
Senator Scott Wiener

Industry reactions

AI companies remain divided on SB53. Anthropic was an early proponent but says it would prefer a consistent federal approach. OpenAI and Meta signalled soft support for the law but warned that a patchwork of state-based AI regulations could hamper US innovation. Industry lobby groups remain opposed. Venture capital firm Andreessen Horowitz has gone further, arguing that state-based AI laws could violate the Constitution by imposing excessive burdens on interstate commerce.

Overall, industry appears to be chiefly concerned about having to comply with inconsistent or duplicative rules across jurisdictions. SB53 could be the first of many dominoes to fall. Lawmakers in New York and Michigan are considering similar bills, citing the need to regulate AI at the state level if the US Government refuses to do so.

Federal versus state

The White House has repeatedly said it wants to support US companies to innovate and lead the world in AI. Earlier this year, the Trump administration tried to ban states from regulating AI for 10 years. Republican senators joined with Democrats to block the measure, citing concerns over a range of issues, including states' rights.

Should more states follow California’s example, pressure will mount on the US Government to enact AI safety laws to reduce regulatory overlap and red tape. Some members of Congress are already pushing in this direction, with Republican Senator Josh Hawley and Democratic Senator Richard Blumenthal introducing a bipartisan bill to establish a federal risk evaluation program for advanced AI systems.

A global precedent

California hosts 32 of the world’s top 50 AI companies. Any AI regulations that California enforces will have global repercussions. Once SB53 takes effect, AI developers, deployers, regulators and users — in the United States and overseas — will benefit from having more public information about frontier AI capabilities and emergent risks. This can then support more evidence-based and proportionate responses to risks.

Governor Newsom is encouraging other jurisdictions to use SB53 as a blueprint for AI laws that balance innovation with public safety. Whether or not it becomes a blueprint, SB53 has already set an important precedent by rebuffing those who claim that frontier AI regulation is intractable, impractical and incompatible with innovation. SB53 tries to address the worst safety concerns — the kind the public reasonably expects big tech to help mitigate — without stifling start-ups and entrepreneurship. That this legislation originated in California, the very heartland of technological progress, sends a strong message that smart regulation should spur innovation, not hamper it.