The AI regulation dilemma is the same for all jurisdictions, but the solutions are not. China, the EU, and the U.S. each approach AI law differently to harness the power of AI while mitigating its risks. Meanwhile, AI ethics principles have flourished all around the world. Across all of these legal and ethical texts, there are points of convergence and divergence. This article outlines the different approaches to show how major jurisdictions regulate AI and the rationales behind their choices.
I regularly update this article to reflect the rapid changes in the AI legal frameworks studied here.
Unlike with data privacy, the EU may not be the one to set the global standards for AI regulation. China and the U.S. are also competing for the leadership role, and the three jurisdictions started their work at nearly the same time. This race to regulate artificial intelligence started before ChatGPT, but generative AI has drastically accelerated work around the world. The divergence and debate stemming from this race are not far from the different directions these jurisdictions took on data privacy.
What is the AI Regulation Dilemma?
The AI regulation dilemma is the balance between state control and innovation. AI laws may impose requirements that stifle innovation, whereas uncontrolled innovation may cause adverse consequences for society. Therefore, each jurisdiction must choose an approach that reflects its broader national priorities on AI.
This balance is pivotal because overly stringent AI law could hinder the creative and developmental potential of AI, while insufficient regulation might lead to outcomes detrimental to societal welfare, ethical standards, and individual rights. Different jurisdictions adopt varying approaches based on their broader national goals, cultural norms, and socio-political contexts. This explains the diversity of AI regulatory frameworks across countries and regions, each reflecting a unique stance on issues like privacy, security, economic competitiveness, and technological leadership.
Therefore, resolving the dilemma is about finding an optimal path that nurtures technological innovation and its benefits, while safeguarding against potential risks and aligning with the nation’s broader objectives and values.
The EU AI Act: A Risk-Based, Cover-All Approach
The EU wants to set the global standard for AI regulation with the AI Act, a text covering nearly all sectors, just as it did with GDPR for data privacy. To this end, the AI Act will be a Regulation, i.e. an act that is directly applicable within the Union, like GDPR (no need for each Member State to pass a specific law). Although the EU AI Act does not exist yet and several elements remain unclear, an agreement has been reached and the text is expected to pass in early 2024. It classifies AI systems into different categories based on their risk for society, and this classification triggers different compliance requirements. The Act prohibits certain AI systems, such as social scoring and live facial recognition in public places. Then come high-risk systems, which bear the majority of the regulatory burden (more specifically, it falls on the providers of such AI solutions); these systems are often used in HR or law enforcement. They will need to go through Fundamental Rights Impact Assessments, which are expected to be akin to the Data Protection Impact Assessments under GDPR (after all, data protection is one of the fundamental rights).
The EU AI Act imposes transparency obligations on systems whose output an end-user could mistake for non-AI data or information (such as chatbots and other AI-generated content). Foundation models, such as those powering ChatGPT, will also be regulated, in particular through transparency requirements and information about their functioning, their design, and how organizations are supposed to use or embed them in their own solutions. Finally, AI systems that do not fall into those categories are outside the scope of the EU AI Act (for example, spam filters or video games).
The rationale behind this approach is that the EU wants to be broad and cover all sectors, as with GDPR. It wants to be early so that the Act can serve as a model legal framework globally and thereby shape global practices (even though the Act's scope is the EU internal market). It features a risk-based approach, meaning companies need to identify those risks themselves and mitigate them appropriately. While the Artificial Intelligence Act doesn't grant rights directly to individuals like GDPR did, its ultimate goal is to ensure the protection of fundamental rights. This is what really drives the EU approach and sets it apart from China and the U.S. in the debate.
The EU has chosen to move very fast on AI regulation, as Thierry Breton (European Commissioner for Internal Market) has proudly underlined many times. One of the EU's stated ambitions for the AI Act is to make it a global standard.
The U.S.: Guidelines and Narrow Bills
The U.S. takes an approach to AI regulation that is much more piecemeal and narrow than the EU's, relying largely on voluntary compliance. The rationale is that this technology still needs room to grow and develop before broad, binding regulation becomes necessary. The government and private organizations publish AI ethics principles and guidelines, and bills have been proposed at federal and state levels, but nothing similar to the EU AI Act is underway at the moment. This lightweight, sectoral approach is akin to the U.S. stance in the data privacy regulation debate.
At the federal level, the U.S. Executive Order on AI has drawn the most attention so far. Issued by President Biden, it focuses on promoting ethical, safe, and trustworthy AI development and use. It provides guidelines for federal agencies and emphasizes American values like privacy, civil rights, and liberties. The order seeks to balance innovation in AI with responsible usage, ensuring the technology aligns with the public interest and national values, both in government and the private sector. This initiative aims to maintain U.S. leadership in AI while addressing concerns about AI's impact on society. It is not legislation, but, broadly, it requires agencies to purchase and operate only responsible AI systems. The companies building these AI systems are subject to certain requirements, such as transparency. It also contains an express call from Biden for Congress to pass bipartisan legislation on both AI and privacy. It's still uncertain how this order will be enforced.
Several bills are also under consideration, most with a relatively narrow scope. At the federal level, a growing number of proposals are emerging; one of the latest would require AI companies to disclose copyrighted training data. At the state level, a California bill proposes rules for automated decision-making technology, and Florida may strengthen transparency requirements for AI-generated content in political ads.
The risk for the U.S., with its wait-and-see approach, is that it won't weigh much in the debate. The shaping of its laws (and of companies' practices) may actually happen through the so-called Brussels Effect: companies will build their practices to be compliant with the EU AI Act, which will de facto cover most U.S. requirements.
China’s AI Law: Dilemma Between Development & State control
China’s AI regulation is not as strict as one may think. China does want to maintain state control more than in the EU or the US, but not at the expanse of developing a technology they want to dominate in the world by 2030. Let’s explore this conundrum and explain how AI regulation in China could be a model for other countries. The potential for China’s influence over global AI regulation cannot be dismissed.
The Chinese government has made clear its goal for China to become the global AI leader by 2030. In this race for AI dominance, China is mainly battling the U.S. The fight plays out not only in research output, but also in export restrictions and espionage concerns tied to national security.
Against this backdrop, China supports its AI sector through various policy tools (see below). One could say this support is part of a broader Chinese policy encouraging domestic development and Chinese actors. We can see it in China's broader legal framework on tech, such as its data privacy law or the restrictions on cross-border data transfers in PIPL. Of course, the law is one of the country's policy tools supporting its objective of global AI dominance.
To that end, I’ll show in the rest of the article how China’s AI regulation aims to support this goal.
China is one of the first movers on AI regulation, as opposed to data privacy, where it now largely follows the EU model (as far as the consumer's privacy is concerned, rather than the citizen's). But for AI law, China can propose its own approach, early enough that neither the EU nor the U.S. has taken the space yet.
China’s Agile Method to Regulate AI
The Chinese method, so far, has been to rapidly enact texts on the specific areas and issues it wants to step into. Those texts may first be published as drafts, then amended following discussions with private organizations and academia. A prime example of this iterative approach is China's set of rules on generative AI: first proposed in April 2023, then finalized in August 2023.
This agile method has both pros and cons. The benefit is that the Chinese legal framework on artificial intelligence is much better at responding quickly to new tech such as generative AI. In Europe, by contrast, ChatGPT forced lawmakers to shoehorn specific provisions into the draft AI Act, which almost derailed negotiations. On the other hand, as long as Chinese laws on AI aren't stabilized, companies risk navigating gray areas, especially as the way those rules are enforced remains murky.
Chinese AI Laws That Favor AI Adoption
The comments on April’s draft led to a significant watering down of its requirements, making it much friendlier for AI companies. For example, what was initially an obligation to ensure that outputs were true, became only a best effort requirement. Other rules on AI are those on algorithmic recommendation and deep synthesis.
Finally, high-level directives from the government have other legal impacts that favor the development of AI in China. For example, in November 2023 a Beijing court ruled that AI-generated content could benefit from copyright protection. This is in stark contrast with the U.S. approach (where similar claims have been dismissed), and the EU AI Act will also contain copyright provisions that will cause problems for AI companies. Although the decision comes from a lower-level court, this legal stance obviously favors the adoption of AI in the country. It may also create other problems, such as hindering the human creativity that AI still relies on. But it shows that China's AI regulation generally favors AI adoption and development.
Determination to Maintain State Control
A salient particularity of China's AI law is the need to maintain state control, which may clash with the desire to foster AI adoption in the country. This need was recently underlined by Xi Jinping himself. A common point of China's AI texts is indeed to mark the need to follow the “correct political direction”. Despite the above-mentioned watering down and softening of requirements, these political obligations remain, especially for public-facing AI systems. This is where censorship can resurface, and targeted enforcement can also come into play. In the EU and the U.S., by contrast, the need to respect individuals' rights and democracy runs through nascent laws and AI ethics principles.
It’s important to note that state control doesn’t only happen through laws on AI. China’s public spending into AI, especially through state-backed venture capital funds, is another way to keep AI companies in check and incentivize the development of AI systems towards alignment with government priorities.
Who Will Solve the Global AI Regulation Dilemma?
In this article, I outlined the three different approaches taken by China, the EU, and the U.S. on AI law. But it's impossible to say today who will set the international standards for AI laws. Looking at privacy law for comparison, the EU led the race and decisively set the global rules: GDPR is now largely a model for privacy legislation around the world. To date, there is no similar model for AI laws.
- The EU wants the AI Act to be the umbrella law under which AI systems are classified and regulated accordingly. AI safety for the protection of fundamental rights is at the heart of its approach. The risk for the EU is to lack agility in regulating AI, a technology that is itself changing very fast. For example, the AI Act, proposed in 2021, was not fit for generative AI (ChatGPT) and had to be amended.
- The U.S. relies on guidelines to encourage AI safety and is still discussing bills, most of which have a narrow scope. It also lacks a broad federal privacy law, which would tackle many AI-related issues. It is, to date, mostly following a wait-and-see approach. The risk for the U.S. is not influencing the global debate, for lack of a clear model to propose.
- China proposes what is perhaps the most interesting approach to regulating AI so far. It develops a set of policy tools targeting precise AI issues, which it enacts and modifies quickly. This could give China the agility to rapidly propose solutions to new AI issues and become a model for AI laws globally. The risk for China is, therefore, a lack of stability in its AI legal framework, which could leave companies in gray areas and hinder development and investment.
These three jurisdictions have each chosen a direction for solving the AI regulation dilemma. And in 2024, it's still a fast-moving, emerging topic. I will study their rationales more deeply, and where these approaches can lead us, in my future writings. But I can already say that it will be harder for the EU to set global standards on AI regulation than it was for data privacy.