The AI regulation dilemma is the same for all jurisdictions, but the solutions are not. China, the EU, and the U.S. have all launched legal initiatives to harness the power of AI and mitigate its risks. Meanwhile, AI ethics principles have flourished all around the world. Across these legal and ethical texts, there are points of both convergence and divergence. This article outlines the different approaches to show how major jurisdictions regulate AI and the rationales behind their choices.

I regularly update this article to reflect the rapid changes in the AI legal frameworks studied here.

Unlike with data privacy, the EU may not be the one to set the global standards for AI regulation. China and the U.S. are also competing for the leadership role, and the three jurisdictions started their work at nearly the same time. This race to regulate artificial intelligence started before ChatGPT, but generative AI has drastically accelerated efforts around the world. The divergence and debate that stem from this race are not far from the different directions taken in data privacy.

What is the AI Regulation Dilemma?

The AI regulation dilemma is the balance between state control and innovation. AI laws may impose requirements that stifle innovation, whereas uncontrolled innovation may cause adverse consequences for society. Therefore, each jurisdiction must choose an approach that reflects its broader national priorities on AI.

This balance is pivotal because overly stringent regulations could hinder the creative and developmental potential of AI, while insufficient regulation might lead to outcomes detrimental to societal welfare, ethical standards, and individual rights. Different jurisdictions adopt varying approaches based on their broader national goals, cultural norms, and socio-political contexts. This explains the diversity in AI regulatory frameworks across countries and regions, each reflecting its unique stance on issues like privacy, security, economic competitiveness, and technological leadership.

Therefore, resolving the dilemma is about finding an optimal path that nurtures technological innovation and its benefits, while safeguarding against potential risks and aligning with the nation’s broader objectives and values.

The EU AI Act: A Risk-Based, Cover-All Approach

With the AI Act, the EU wants to set the global standard for AI regulation, as it did with GDPR for data privacy, through a text that covers nearly all sectors. To this end, the AI Act will be a Regulation, i.e. an act directly applicable within the Union, like GDPR (no need for each Member State to pass a specific law). Although the EU AI Act is not yet in force and several elements remain unclear, an agreement has been reached and the text is expected to be passed in early 2024. It classifies AI systems into different categories based on the risk they pose to society, and this classification triggers different compliance requirements. The Act prohibits certain AI systems, such as social scoring and live facial recognition in public places. Then come high-risk systems, which bear the majority of the regulatory burden (more specifically, the providers of such AI solutions do). These systems are often used in areas like HR or law enforcement. They will need to go through Fundamental Rights Impact Assessments, expected to be akin to the Data Protection Impact Assessments under GDPR (after all, data protection is one of the fundamental rights).

The EU AI Act imposes transparency obligations for systems whose output an end-user could mistake for non-AI data or information (such as chatbots and other AI-generated content). Foundation models such as the one behind ChatGPT will also be regulated, in particular through transparency and information requirements about their functioning, their design, and how organizations are supposed to use or embed them in their own solutions. Finally, AI systems that do not fall into those categories are outside the scope of the EU AI Act (for example, spam filters or video games).

The rationale behind this approach is that the EU wants to cover all sectors broadly, as it did with GDPR. It wants to move early so the Act can serve as a model legal framework globally and thereby shape global practices (even though the Act's scope is the EU internal market). It features a risk-based approach, meaning companies must identify those risks themselves and mitigate them appropriately. While the Artificial Intelligence Act does not grant rights directly to individuals as GDPR did, its ultimate goal is to ensure the protection of fundamental rights. This is what really drives the EU approach and sets it apart from China and the U.S. in the debate.

The EU has chosen to move very fast on AI regulation, a point Thierry Breton (European Commissioner for Internal Market) has proudly underlined many times. One of the EU's stated ambitions for the AI Act is to make it a global standard.

The U.S.: Guidelines and Narrow Bills

The U.S. takes an approach to AI regulation that is much more piecemeal and narrow than the EU's, relying largely on voluntary compliance. The rationale is that the technology still needs room to grow and develop before broad, binding regulation becomes necessary. The government and private organizations publish their AI ethics principles and guidelines, and bills are proposed at the federal and state levels, but nothing similar to the EU AI Act is underway at the moment. This lightweight, sectoral approach is akin to the U.S. stance in the data privacy regulation debate.

At the federal level, the U.S. Executive Order on AI has drawn the most attention so far. Issued by President Biden, it focuses on promoting ethical, safe, and trustworthy AI development and use. It provides guidelines for federal agencies and emphasizes American values like privacy, civil rights, and civil liberties. The order seeks to balance innovation in AI with responsible usage, ensuring the technology aligns with the public interest and national values, both in government and the private sector. This initiative aims to maintain U.S. leadership in AI while addressing concerns about AI's impact on society. It is not binding law, but, broadly, it requires agencies to purchase and operate only responsible AI systems, and the companies building these systems are subject to certain requirements such as transparency. The order is also an express call from Biden for Congress to pass bipartisan legislation on both AI and privacy. It is still uncertain how it will be enforced.

Several bills are also being considered, most with a relatively narrow scope. At the federal level, a growing number of proposals are emerging; one of the latest would require AI companies to disclose copyrighted training data. At the state level, a California bill proposes rules for automated decision-making technology, and Florida may strengthen transparency requirements for AI-generated content in political ads.

The risk for the U.S. with this wait-and-see approach is that it won't carry much weight in the debate. The shaping of its laws (and of companies' practices) may actually happen through the so-called Brussels Effect: companies will build their practices to comply with the EU AI Act, which will de facto cover most U.S. requirements.

China: Building AI Regulation Fast and Iteratively

China is among the first to regulate AI, as part of a comprehensive long-term plan to develop the technology. Whereas China's data privacy law is largely inspired by the EU's principles and concepts, it is taking its own direction on AI laws, and it is moving fast. China is building a set of rules targeting different AI-related issues in a very iterative manner, the opposite of the EU AI Act's approach: a draft is issued for comment, then a final version, which is quickly amended if need be. For example, rules on algorithmic recommendations, deepfakes, and generative AI were passed in 2021, 2022, and 2023 respectively, each after a first draft and a few months of discussion. AI regulation in China focuses on encouraging innovation while maintaining state control (with requirements such as adhering to the correct political direction). Contrary to what one might think, the way China has solved the AI regulation dilemma so far is not by tipping the balance strongly towards state control.

In 2023, China made significant strides in AI regulation, particularly on generative AI. It introduced draft measures for managing generative AI services in April, and the finalized measures came into effect on August 15, 2023. They were, however, significantly less stringent than the earlier draft, with some of the more rigorous requirements removed, including the obligations to rectify illegal content within a three-month period and to ensure all training data and outputs are “truthful and accurate”. These rules only apply to public-facing generative AI systems.

China's fundamental approach to AI governance remains focused on mitigating harm to individuals and maintaining social stability and state control, while also aiming for global leadership in AI and influence over the debate on AI regulation. This is evident in its efforts to develop international standards that might give it a competitive edge.

Who will solve the Global AI Regulation Dilemma?

In this article, I outlined the three different approaches taken by China, the EU, and the U.S. to the AI regulation dilemma. But it is impossible to say today who will set the international standards for AI laws. Looking at privacy laws for comparison, the EU led that race and decisively set the global rules: GDPR is now largely a model for privacy legislation around the world. To date, there is no similar model for AI laws.

  • The EU wants the AI Act to be the umbrella law under which AI systems are classified and regulated accordingly. AI safety for the protection of fundamental rights is at the heart of its approach. The risk for the EU is to lack agility in regulating AI, a technology that is itself changing very fast. For example, the AI Act, proposed in 2021, was not fit for generative AI (ChatGPT) and had to be amended.
  • The U.S. relies on guidelines to encourage AI safety and is still discussing bills, most of which have a narrow scope. It also lacks a broad federal privacy law that would tackle many AI-related issues. It is, to date, mostly following a wait-and-see approach. The risk for the U.S. is not influencing the global debate, for lack of a clear model to propose.
  • China proposes what is perhaps the most interesting approach to regulating AI so far. It develops a set of policy tools targeting precise AI issues, which it enacts and modifies quickly. This could give China the agility to rapidly propose solutions to new AI issues and become a model for AI laws globally. The risk for China is, therefore, to lack stability in its AI legal framework, which could leave companies in gray areas and hinder development and investment.

These three jurisdictions have each chosen a direction on how to solve the AI regulation dilemma. And in 2024, it is still a fast-moving topic that is just emerging. I will study their rationales, and where they may lead us, more deeply in future writings. But I can already say that it will be harder for the EU to set global standards on AI regulation than it was for data privacy.

Emmanuel Pernot-Leplay
