
California Enacts AI Safety Bill SB 53, Mandating Transparency for Large AI Labs


California Governor Gavin Newsom has signed Senate Bill 53 (SB 53) into law, establishing a first-in-the-nation framework that requires large artificial intelligence (AI) laboratories to enhance transparency regarding their safety and security protocols. The legislation focuses on preventing catastrophic risks associated with advanced AI models, such as their potential misuse in cyberattacks against critical infrastructure or the development of bioweapons.

Under SB 53, AI companies are required to disclose specific safety and security measures and to adhere to them. California's Office of Emergency Services is tasked with enforcing these protocols, ensuring that developers maintain their stated safeguards. Adam Billen, Vice President of Public Policy at Encode AI, said the bill demonstrates how state regulation can protect innovation while ensuring product safety. Billen noted that many AI firms already conduct safety testing and release model cards, suggesting the bill formalizes existing practices and guards against companies weakening safety standards under competitive pressure.

As an example of the industry behavior the bill aims to mitigate, Billen cited OpenAI's public statement that it may adjust its safety requirements if rival labs release high-risk systems without similar safeguards. He argues that policy can reinforce companies' existing safety commitments, preventing them from compromising standards due to market or financial pressures.

While public opposition to SB 53 was less pronounced than the opposition to its predecessor, SB 1047, which Newsom vetoed last year, the broader AI industry and venture capital firms have largely argued that AI regulation could impede technological progress and undermine U.S. competitiveness against nations like China. Entities including Meta and Andreessen Horowitz have reportedly invested heavily in political action committees supporting pro-AI politicians and previously advocated for a federal moratorium on state AI regulation.

Efforts to preempt state laws continue at the federal level, with Senator Ted Cruz introducing the SANDBOX Act, which would allow AI companies to seek waivers from certain federal regulations for up to 10 years. Billen cautioned against narrowly scoped federal AI legislation, suggesting it could undermine federalism in the context of critical emerging technologies. He also argued that state bills, often addressing issues like deepfakes and algorithmic discrimination, do not hinder the U.S. in the AI race with China, suggesting that export controls and ensuring domestic chip supply are more pertinent factors. Legislative proposals such as the Chip Security Act and the CHIPS and Science Act aim to address these areas, though some tech companies have expressed reluctance concerning certain aspects of these efforts.
