California's AI Bill: What SB 1047 Tried to Fix and Why It Was Vetoed

California proposed a bill to regulate AI and mitigate its potential risks, but the bill was eventually vetoed. Though based on good intentions, it was an example of an AI law that failed to balance safety with innovation, weighting the former too heavily. In this article, I explore what the bill was trying to fix, its shortcomings, and what lies ahead. I also compare it at a high level with the EU AI Act to understand more clearly how they differ and the rationales behind the two approaches.

The Stakes: Addressing the Rapid Evolution of AI

California’s proposed AI bill, officially titled the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), was a significant move towards establishing safety measures for large-scale AI models. As AI technologies evolve at an unprecedented pace, the potential risks they pose become ever more apparent. The bill sought to mitigate these risks by requiring AI companies to test their models rigorously and publicly disclose safety protocols. These measures aimed to prevent scenarios where AI could be manipulated to disrupt critical infrastructure, create chemical weapons, or cause other catastrophic events.

Authored by Democratic Senator Scott Wiener, the bill targeted systems costing over $100 million to train, a threshold no current AI models have reached. The aim was to set foundational safety standards, creating a framework that other states and potentially the federal government could follow. This proactive approach reflected growing concerns about AI's capabilities and the urgency of regulating them.

Support and Opposition: Safety vs. Innovation

The bill received mixed reactions. Supporters, including various AI safety advocates and some tech leaders, argued that it would set much-needed ground rules for an industry advancing rapidly. Notable supporters included Anthropic, an AI startup backed by Amazon and Google, and tech mogul Elon Musk. Proponents emphasized that innovation could coexist with safety measures, highlighting the potentially catastrophic risks of unregulated AI development.

However, the bill also faced significant opposition from tech giants such as OpenAI, Google, and Meta. Critics argued that safety regulations should be established at the federal level rather than by individual states. They contended that the bill’s stringent requirements could stifle innovation, imposing hefty compliance costs that might hinder the growth of AI technologies. Former House Speaker Nancy Pelosi and other California House members also opposed the bill, citing concerns about overregulation and its potential impact on the tech industry.

Governor Newsom’s Decision: Veto

On September 29, 2024, Governor Gavin Newsom vetoed SB 1047, explaining that while the bill was well-intentioned, it did not differentiate adequately between AI systems based on their risk levels. Newsom argued that the bill applied stringent standards to even the most basic functions of large AI systems, which could give the public a false sense of security. He expressed concern that smaller, specialized models might pose equal or greater risks than those targeted by the bill, suggesting that the legislation’s broad scope was not the best approach to protecting the public.

In his veto message, Newsom highlighted the importance of safety protocols and clear consequences for the misuse of AI. However, he emphasized the need for a more balanced approach, informed by an empirical analysis of AI systems and their capabilities. This perspective underscores the challenge of regulating a technology that is both rapidly advancing and difficult to predict. I agree with this assessment: the bill was well-meaning but flawed.

The Global Context: Comparing California’s AI Bill with the EU AI Act

California’s AI bill drew comparisons to the European Union’s AI Act, which has been hailed as a pioneering effort in regulating AI technologies. The EU AI Act categorizes AI systems based on risk levels, applying stricter regulations to high-risk applications such as healthcare and critical infrastructure. This risk-based approach ensures that the most potentially harmful AI systems undergo rigorous testing and compliance.

The California bill, in contrast, focused specifically on large-scale AI models requiring significant resources to train. It set clear thresholds and detailed safety measures, but it was also too forward-looking, targeting models that do not yet exist. While it could be seen as a necessary step to future-proof AI regulation and address risks before they materialize, it would have imposed real burdens on innovation in exchange for mitigating risks that remain hypothetical.

The debate over AI regulation reflects broader global concerns about balancing innovation with safety. Effective regulation can create a framework that protects the public while allowing AI to develop responsibly. California’s approach, whether through SB 1047 or other future initiatives, will likely influence global standards and practices in AI regulation.

The Road Ahead: Collaborative Efforts and Future Regulations

Following the veto, Governor Newsom announced plans to collaborate with AI experts to develop “guardrails” that balance innovation with safety. This collaborative approach aims to find a middle ground that mitigates risks without stifling technological progress. Newsom’s decision to veto SB 1047 does not signal an end to AI regulation efforts in California; rather, it highlights the need for a nuanced and flexible regulatory framework.

California’s ongoing efforts to regulate AI will be closely watched by other states and countries. As a global tech leader, California’s decisions set precedents that can shape the future of AI regulation worldwide. The state’s approach to balancing innovation and safety will serve as a model for other regions grappling with similar challenges.

The future of AI regulation lies in proactive, informed decision-making that incorporates diverse perspectives from industry experts, policymakers, and the public. By implementing reasonable safeguards, we can unlock the full potential of AI while ensuring that its development benefits society as a whole. Public engagement and education are crucial in this process, as informed citizens can contribute to shaping policies that reflect shared values and priorities.
