California’s AI Safety Law Sets a New Standard for Responsible Innovation
California has once again positioned itself at the forefront of technology governance with the signing of Senate Bill 53 — a groundbreaking Artificial Intelligence (AI) safety and transparency law — by Governor Gavin Newsom this week. The legislation, hailed as a balance between innovation and accountability, aims to ensure that AI development remains safe, secure, and socially responsible.
According to Adam Billen, Vice President of Public Policy at Encode AI, a youth-led advocacy group, the new law proves that progress and regulation can coexist. Speaking on TechCrunch’s Equity podcast, Billen emphasized that effective policy does not have to stifle creativity.
At its core, SB 53 requires major AI developers to be transparent about their safety and security procedures — particularly those designed to prevent the misuse of AI systems in cyberattacks or bioweapon creation. The law also mandates adherence to those safety standards, with enforcement overseen by the California Office of Emergency Services.
Billen noted that many companies already conduct internal safety testing and publish model reports, but some have begun relaxing those measures under competitive pressure.
The issue of competition-driven safety compromises has become increasingly visible in the AI sector. OpenAI, for instance, has previously indicated it might adjust its own safety protocols if rivals released advanced models without equivalent safeguards. Billen argues that clear legal frameworks can prevent companies from backtracking on their safety promises in the race for dominance.
While opposition to SB 53 was minimal compared to its predecessor, SB 1047 — which Governor Newsom vetoed last year — some voices in Silicon Valley continue to argue that regulation slows innovation and could weaken America’s technological advantage over China. Major industry players such as Meta and influential venture capitalists like Andreessen Horowitz have reportedly funneled millions into political campaigns favoring a deregulated AI landscape.
Earlier this year, several of these forces supported a proposed 10-year moratorium that would have barred states from enacting AI-related laws. Encode AI, in partnership with over 200 organizations, successfully campaigned against the measure. Still, the battle over AI regulation continues.
Senator Ted Cruz recently introduced the SANDBOX Act, which would allow AI companies to apply for waivers exempting them from certain federal regulations for up to a decade. Billen warned that such federal-level efforts could undermine state authority.
Billen acknowledged that the global AI race — particularly with China — demands strategic policy support but maintained that dismantling state-level safeguards is not the solution.
He further suggested that genuine competitiveness requires investment in areas like semiconductor export controls and domestic chip production — not the rollback of consumer protection laws. Legislative proposals such as the Chip Security Act and the CHIPS and Science Act aim to strengthen those areas, though major tech companies like OpenAI and Nvidia have expressed concerns about potential trade limitations.
In recent months, U.S. policy toward AI chip exports to China has fluctuated, with the Trump administration initially expanding restrictions before partially reversing them in exchange for a share of revenue. Billen argued that while Washington debates export policies, tech lobbyists continue to push narratives that undermine modest state regulations like SB 53.
Despite the industry resistance, Billen believes SB 53 demonstrates the effectiveness of democratic collaboration between policymakers and technology leaders.
Source: TechCrunch