California Enacts Landmark AI Regulation, Requiring Disclosure of Risks
30 Sep
Summary
- California Governor Gavin Newsom signs law mandating AI companies disclose risk mitigation plans
- Law applies to firms with over $500 million in revenue, allows fines up to $1 million per violation
- Aims to fill gap left by lack of federal AI legislation, seen as model for other states

On Monday, September 29, 2025, California Governor Gavin Newsom signed into law a groundbreaking requirement that AI companies with over $500 million in annual revenue, including OpenAI, Google, Meta, Nvidia, and Anthropic, disclose their plans to mitigate potentially catastrophic risks from their cutting-edge AI models.
The new law, known as SB 53, is intended to fill a gap left by the lack of federal AI legislation in the United States. Newsom's office stated that the law provides a model for the rest of the country to follow, as states like Colorado and New York have also recently enacted their own AI regulations.
Under the law, companies must assess the risk that their AI technology could escape human control or aid in the development of bioweapons, and publicly disclose those assessments. Violations can result in fines of up to $1 million per violation.
Newsom emphasized that the law aims to strike a balance between public safety and continued innovation in the growing AI industry, which is critical to California's economy. Anthropic co-founder Jack Clark called the legislation "a strong framework" that achieves this goal.
However, the industry still hopes for a federal framework that would replace the emerging patchwork of state-level regulations. Some Republicans in Congress are working on AI legislation that could preempt state laws, while Democrats are also discussing a federal standard.