AI Regulation

The AI regulation dilemma (balancing safety with innovation) is universal, yet each jurisdiction pursues unique solutions. China, the EU, and the U.S. each take distinct approaches to harness AI’s potential while mitigating its risks. At the same time, AI ethics principles have emerged worldwide, revealing both common ground and notable differences across legal and ethical frameworks. This article examines how major jurisdictions regulate AI, the varied directions they take, and the underlying rationales driving their approaches.

I regularly update this article to reflect the rapid changes in the AI legal frameworks considered here (last update: November 2024).

Unlike in the realm of data privacy, the EU may not be the sole standard-setter for AI regulation. China and the U.S. are also competing for leadership, with each jurisdiction beginning its regulatory efforts at around the same time. Although this race to regulate artificial intelligence began before ChatGPT, the rise of generative AI has significantly accelerated global efforts. The divergence and debates in this AI race echo the varied approaches seen in data privacy regulation, as each jurisdiction pursues its own path based on national priorities and values.

Executive Summary

This article provides a comprehensive comparison of the AI regulatory approaches taken by China, the EU, and the U.S., each shaped by unique national goals, values, and socio-political factors. While the EU’s AI Act seeks to set a global standard by focusing on risk-based safeguards for fundamental rights, it may struggle to keep up with AI’s rapid evolution. The U.S. adopts a flexible, sector-specific approach that promotes innovation, though it risks losing influence due to its lack of a centralized framework. China, meanwhile, balances agile regulation with state control, rapidly adapting its policies to support AI adoption while aligning AI development with government objectives. As global powers grapple with AI’s societal risks and potential, the race to establish international norms is still wide open.

What is the AI Regulation Dilemma?

The AI regulation dilemma centers on balancing safety with innovation. On the one hand, stringent AI laws can stifle innovation; on the other, unchecked innovation can pose risks to society and individuals.

Each jurisdiction may approach this balance differently, guided by broader national goals, cultural norms, and socio-political contexts. This diversity reflects unique national priorities around issues such as privacy, security, economic competitiveness, and technological leadership.

Ultimately, addressing the AI regulation dilemma requires each country to find an optimal path that fosters technological advancement and its benefits while safeguarding against potential risks and aligning with the nation’s values and objectives.

The EU AI Act: A Risk-Based, Comprehensive Approach

With the AI Act, the EU aims to establish a global benchmark for AI regulation, much as the GDPR did for data privacy. Like the GDPR, the AI Act is a regulation – meaning it applies directly across all EU Member States without requiring transposition into national law.

The AI Act categorizes AI systems based on their risk to society, with each category triggering different compliance requirements. Certain AI systems, such as social scoring and live facial recognition in public spaces, are outright prohibited. High-risk systems – often used in HR or law enforcement – bear the majority of the regulatory burden, particularly for providers of these AI solutions. These systems will need to undergo Fundamental Rights Impact Assessments, similar to the GDPR’s Data Protection Impact Assessments, reflecting the EU’s emphasis on protecting fundamental rights, including data privacy.

The EU AI Act also includes transparency requirements for AI systems whose output end-users might mistake for human-generated content, such as chatbots or other AI-generated material. Foundation models – such as the GPT models underlying ChatGPT – are subject to additional obligations, particularly around transparency: providers must disclose details about these models’ functioning and design, along with guidelines for safely integrating them into organizational solutions. Finally, AI systems outside these categories – such as spam filters and video games – fall outside the scope of the EU AI Act.

The EU’s approach with the AI Act is intentionally broad, aiming to cover all sectors much like the GDPR. By being early to establish a comprehensive framework, the EU hopes the Act will serve as a global model, influencing practices worldwide—even though its formal scope is the EU internal market. The Act’s risk-based approach requires companies to identify and mitigate AI-related risks proactively. While the AI Act doesn’t directly grant individual rights as the GDPR does, its ultimate goal is to protect fundamental rights, a core driver that distinguishes the EU’s approach from those of China and the U.S.

The EU has prioritized swift action on AI regulation – a point European Commissioner Thierry Breton has proudly emphasized – in line with its ambition for the AI Act to set the global standard.

The U.S.: Guidelines and Narrow Bills

The U.S. approach to AI regulation is more piecemeal and sector-specific, with a strong emphasis on voluntary compliance – quite different from the EU’s comprehensive strategy. The underlying rationale is that AI technology still needs space to grow and develop before sweeping, binding regulations are put in place. Both government and private organizations have published various AI ethical principles and guidelines, and there are bills proposed at both federal and state levels. However, nothing as broad or centralized as the EU AI Act is currently on the horizon. This lightweight, sectoral approach is similar to the U.S. stance on data privacy regulation.

At the federal level, the U.S. Executive Order on AI has garnered significant attention. Issued by President Biden in October 2023, it aims to promote ethical, safe, and trustworthy AI development, focusing on American values such as privacy, civil rights, and civil liberties. The order provides guidelines for federal agencies, balancing the need for AI innovation with responsible use aligned with the public interest and national values across both government and the private sector. Although it is not legislation, it directs agencies to procure and operate only responsible AI systems, and it places transparency and accountability requirements on companies developing these systems. The order also expressly calls on Congress to pass bipartisan legislation on AI and privacy. However, questions remain about how the order will be enforced.

Several bills are also under consideration, most with a relatively narrow scope. At the federal level, proposals continue to multiply; one of the latest would require AI companies to disclose copyrighted material in their training data. At the state level, a California bill proposes rules for automated decision-making technology, while Florida may strengthen transparency requirements for AI-generated content in political ads.

California’s recent AI bill (SB 1047) offered a new perspective on the U.S. approach, emphasizing stringent safety standards specifically for high-cost AI models. However, Governor Newsom vetoed it in September 2024, citing its potential to stifle innovation and advocating instead for a balanced, risk-based regulation model similar to the EU’s approach. This development underscores the evolving debate within the U.S. on regulating AI without hindering its progress. For further details on California’s AI bill SB 1047, see my comments here.

The risk of the U.S.’s wait-and-see approach is that it may diminish its influence in the global regulatory debate. U.S. laws – and company practices – may ultimately be shaped by the so-called “Brussels Effect,” where companies adapt to comply with the EU AI Act, which will, in effect, set the standard for many U.S. requirements as well.

China’s AI Law: Balancing Development and State Control

The Chinese government has set a clear objective: to make China the global AI leader by 2030. In this race for AI dominance, China’s primary competitor is the U.S., and the competition extends beyond research output to include export restrictions and espionage concerns, both critical to national security.

To advance its AI sector, China uses a range of policy tools that align with its broader strategy to promote domestic growth and strengthen Chinese tech companies. This approach is reflected in China’s broader legal framework for technology, including its data privacy laws and the cross-border data transfer restrictions in the Personal Information Protection Law (PIPL). Legislation is thus one of China’s strategic levers in its pursuit of global AI leadership.

In the remainder of this article, I’ll illustrate how China’s AI regulation aligns with this overarching goal.

China has positioned itself as an early mover in AI regulation, in contrast to its approach to data privacy, where it largely follows the EU’s model (particularly regarding consumer privacy rather than citizen privacy). With AI law, however, China is staking out its own path, stepping in while the EU and the U.S. are still shaping their frameworks.

China’s Agile Approach to Regulating AI

China has adopted an agile approach to AI regulation, swiftly enacting policies on specific areas as they emerge. Typically, regulations are initially published as drafts, with amendments made following discussions with private sector stakeholders and academia. A notable example of this iterative process is China’s rules on generative AI, which were first proposed in April 2023 and finalized by August 2023.

This agile method offers both advantages and drawbacks. On the plus side, China’s AI legal framework can adapt quickly to new technologies such as generative AI. In contrast, Europe’s response to the sudden rise of ChatGPT required the AI Act drafters to hurriedly incorporate new provisions, which nearly derailed negotiations. However, the rapidly evolving nature of China’s AI laws means they are not yet fully stabilized, creating gray areas for companies navigating compliance. Additionally, the lack of clarity on enforcement mechanisms adds a layer of uncertainty for businesses operating under these emerging regulations.

China’s AI Laws Favoring AI Adoption

Feedback on the April 2023 draft of the generative AI rules led to a softening of several requirements, creating a more favorable regulatory environment for AI companies. For instance, an initially strict obligation to ensure the accuracy of outputs was revised to a “best effort” standard. Additional AI regulations address algorithmic recommendations and deep synthesis (deepfakes).

High-level government directives also shape a legal landscape that supports AI adoption. In November 2023, a Beijing court ruled that AI-generated content could be eligible for copyright protection – a stark contrast to the U.S., where similar claims have been dismissed, and the EU, where the AI Act’s copyright provisions may present challenges for AI developers. Although this ruling came from a lower-level court, it signals a supportive stance toward AI in China. However, this approach may also raise questions, such as whether it could stifle human creativity, which AI still relies on to some extent. Overall, China’s regulatory approach underscores its commitment to promoting AI adoption and development.

Determination to Maintain State Control

A defining feature of China’s AI regulation is the emphasis on maintaining state control, which can sometimes clash with the goal of fostering rapid AI adoption. This need for control was recently reaffirmed by Xi Jinping, underscoring the importance of ensuring AI development aligns with the “correct political direction.” While certain regulatory requirements have been softened to support AI growth, these political obligations remain firmly in place, particularly for public-facing AI systems. This framework allows room for censorship and selective enforcement, distinguishing China’s approach from the EU and U.S., where respect for individual rights and democratic values is embedded in both emerging laws and ethical guidelines for AI.

State control over AI development in China also extends beyond legislation. Through significant public investment, including state-backed venture capital funds, the government influences AI companies’ development priorities, guiding them toward alignment with national objectives.

Who Will Solve the Global AI Regulation Dilemma?

In this article, we explored the three distinct approaches that China, the EU, and the U.S. have taken toward AI law. Yet, it’s too early to determine who, if anyone, will ultimately set the global standard. If we look to privacy law for comparison, the EU has led the way, with GDPR now widely regarded as the global model for data privacy legislation. So far, however, no equivalent exists for AI regulation.

The EU aims for the AI Act to be an overarching framework, classifying and regulating AI systems according to their societal risks. At its core is the protection of fundamental rights. However, the challenge for the EU lies in its regulatory agility: AI technology evolves rapidly, as demonstrated by generative AI like ChatGPT, which emerged after the AI Act was proposed in 2021 and required substantial amendments to the draft.

The U.S., by contrast, favors guidelines and sector-specific bills to promote AI safety while leaving room for innovation. With no comprehensive federal privacy law yet in place, many aspects of AI regulation remain unaddressed. This wait-and-see approach allows flexibility but also risks sidelining the U.S. in the global debate, lacking a cohesive framework to offer as an international model.

China’s approach may be the most agile, adopting targeted policies to address specific AI issues and enacting laws quickly. This responsiveness enables China to react to new AI developments and could position it as a model for international AI laws. However, the potential lack of stability in its regulatory framework may create uncertainty for companies, leading to gray areas that could hinder development and investment.

These jurisdictions have each taken a unique stance in navigating the AI regulation dilemma, but in 2024, the landscape remains fluid and evolving. In future analyses, I will delve deeper into their rationales and the potential paths forward. For now, it’s clear that establishing global standards in AI regulation may prove far more challenging than it was for data privacy.
