Republicans have introduced a bill to prevent U.S. states from passing AI laws for a decade. The aim is to foster innovation by reducing legal barriers. But at what cost?

In the United States, as in the European Union, regulatory power is fragmented across different levels of government. In the absence of a federal AI law, individual states are stepping in, each with its own priorities. This situation, already familiar from the field of data protection, has produced a patchwork of state-level rules that many companies see as unmanageable. But now, Republicans are trying to freeze the situation. In mid-May, the House of Representatives passed a GOP-backed bill that would prohibit U.S. states from passing their own AI laws for ten years.

A Response to Regulatory Fragmentation

State-level AI legislation is beginning to emerge: some laws target deepfakes, others focus on algorithmic bias. California tried to go further with a broader, more ambitious bill, but it was ultimately vetoed, as I previously discussed here. The lack of coordination among these scattered initiatives is creating growing legal uncertainty, and the tech industry sees this fragmentation as a threat to U.S. competitiveness.

Yet industry leaders don’t want a federal law either, at least not one modeled on the EU’s AI Act. OpenAI CEO Sam Altman recently went so far as to call such a framework “disastrous” for the American AI ecosystem. In short, tech leaders are rejecting both Europe’s AI Act and the growing patchwork of state laws, arguing that regulation is incompatible with the agility needed for rapid innovation.

Ten Years Without Guardrails?

Supporters of the bill argue that a moratorium would give innovation the breathing room it needs. Ten years of regulatory freedom would let AI developers move forward without state-level constraints and offer time to better understand the risks before acting. This reflects the broader AI dilemma: governments must balance incentives for innovation against protections from both known and emerging risks.

And yet, many of these risks are already well documented: threats to privacy, algorithmic discrimination, misinformation, intellectual property violations, and more. While some of these issues can be partially addressed by existing laws, such as the GDPR in Europe, there is no comparable federal framework in the United States.

Several civil society coalitions and expert groups have called on lawmakers to reject the moratorium, arguing that a lack of legal safeguards protects neither individuals nor institutions. The idea that we should "wait until the risks are more concrete" is seen by many as a convenient excuse for inaction. I personally agree that regulation shouldn't be rushed, especially because bad laws can stifle innovation without offering real protection. But the U.S. is hardly over-regulated in tech, and this moratorium will likely do more harm than good. In the EU, we face a different but related problem: rules on digital technology keep piling up, forcing companies to hire AI and GDPR consultants just to keep up, even as regulators struggle to fully apply those new laws. But that is a topic for another discussion.

A Strategic Dilemma

Beneath the surface lies a deeper divide, the very heart of the AI dilemma: should we slow down for AI safety, or accelerate to win the AI race? This bill is part of a broader political vision, championed by the Trump camp, to make the U.S. the global leader in "unleashed AI," in contrast both to China and to Brussels. It's a modern spin on the old "move fast and break things" mantra.

But in pushing regulation aside, the U.S. may also break something vital: public trust. And without trust, there can be no widespread acceptance of AI, no matter how advanced the technology may become.
