
China’s AI regulation is not as strict as one may think. China does want to maintain state control more than the EU or the US do, but not at the expense of developing a technology it aims to dominate globally by 2030. In this article, I explore this conundrum and explain how AI regulation in China could become a model for other countries. The potential for China’s influence over global AI regulation certainly shouldn’t be dismissed.

China Wants to be the Global AI leader, With the Help of Its Laws

The Chinese government has made clear its goal for China to become the global AI leader by 2030. In this race for AI dominance, China is mainly competing against the US. The contest plays out not only in research output, but also in export restrictions and espionage cases, both framed as matters of national security.

Against this backdrop, China supports its AI sector through various policy tools (see below). This support is part of a broader Chinese policy of encouraging domestic development and Chinese actors. We can see it in China’s broader legal framework on tech, such as China’s data privacy law or the restrictions on cross-border data transfers in the PIPL. The law, in other words, is one of the country’s policy tools in service of its objective of global AI dominance.

The rest of this article shows how China’s AI regulation is designed to support this goal.

China’s AI Regulation: Dilemma Between Development & State control

China is one of the first movers on AI regulation, in contrast with data privacy, where it now largely follows the EU model (at least where the consumer’s privacy is concerned, rather than the citizen’s). On AI law, China can propose its own approach, early enough that neither the EU nor the US has occupied the space yet.

China’s Agile Method to Regulate AI

The Chinese method, so far, has been to rapidly enact texts on the specific areas and issues it wants to address. Those texts may first be published as drafts, then amended following discussions with private organizations and academia. A prime example of this iterative approach is China’s rules on generative AI, first proposed in April 2023 and finalized in August 2023.

This agile method has both pros and cons. The benefit is that the Chinese legal framework on artificial intelligence responds much more quickly to new technology such as generative AI. In Europe, by contrast, ChatGPT forced lawmakers to shoehorn specific provisions into the draft AI Act, which almost derailed negotiations. On the other hand, while Chinese laws on AI aren’t yet stabilized, companies risk navigating gray areas, especially as the way those rules are enforced remains murky.

Laws that Favor AI Adoption in China

Public comments on the April 2023 draft of the generative AI rules led to a significant watering down of their requirements, making the final text much friendlier to AI companies. For example, what was initially an obligation to ensure that outputs were true became only a best-effort requirement. China’s other AI rules, such as those on algorithmic recommendation and deep synthesis, follow a similar pattern.

Finally, high-level directives from the government have other legal effects that favour the development of AI in China. For example, in November 2023 a Beijing court ruled that AI-generated content could benefit from copyright protection. This stands in stark contrast with the US approach, where similar claims have been dismissed, and with the EU AI Act, whose copyright provisions will cause problems for AI companies. Although the decision comes from a lower-level court, this legal stance clearly favors the adoption of AI in the country. It may also create other problems, such as hindering the human creativity that AI still relies on. But it illustrates that China’s AI regulation generally favors AI adoption and development.

Determination to Maintain State Control

A salient particularity of China’s legal framework on AI is the imperative to maintain state control, which may clash with the desire to foster AI adoption in the country. This imperative was recently underlined by Xi Jinping himself. A common feature of China’s AI texts is indeed the requirement to follow the “correct political direction.” Despite the watering down and softening of requirements mentioned above, these political obligations remain, especially for public-facing AI systems. This is where censorship can resurface, and where targeted enforcement can come into play. By comparison, in the EU and the US, it is the need to respect individuals’ rights and democracy that runs through nascent laws and AI ethics principles.

It’s important to note that state control doesn’t operate only through laws on AI. China’s public spending on AI, especially through state-backed venture capital funds, is another way to keep AI companies in check and to steer the development of AI systems toward alignment with government priorities.

Emmanuel Pernot-Leplay
