01 · Three models at a glance

Regulation, dominance or state control?

Few topics shape geopolitics in 2025 more than artificial intelligence. But what "correct" AI policy actually means is something the three largest economic powers fundamentally disagree on. The EU has passed the world's first comprehensive AI law — and now fears being seen as hostile to innovation. The US under Trump immediately scrapped Biden's strictest AI executive order on taking office and is betting on maximum deregulation to beat China. China also regulates — but differently: the focus there is less on fundamental rights and more on what AI is allowed to say and whom it serves.

🇪🇺
European Union
"Trustworthy AI"
  • Risk-based tiered model
  • Prohibited AI practices since Feb. 2025
  • Fines up to 7% of annual turnover
  • Focus: fundamental rights & transparency
  • AI Office in Brussels as overseer
🇺🇸
United States
"Minimally Burdensome"
  • No federal law; executive orders only
  • Trump approach: deregulation, dominance
  • State laws actively challenged
  • Focus: economy & national security
  • $500bn Stargate investment plan
🇨🇳
China
"Controlled Innovation"
  • World's first generative AI rules (2023)
  • Mandatory: registration with authorities
  • Content must conform to "socialist values"
  • Focus: state control & leadership claim
  • Goal: global AI leadership by 2030
02 · The EU AI Act

The world's first AI law — and its teeth

On 1 August 2024, the EU AI Act (Regulation (EU) 2024/1689) entered into force. The world's first comprehensive AI law is no experiment — it applies directly in all 27 member states, without national implementing legislation. The approach is risk-based: the higher the potential harm a system could cause, the stricter the requirements.

The tiered model: Unacceptable risk = prohibited. High risk = strict obligations. Low risk = transparency requirements. No risk = free. The principle sounds logical — but classification is complex and costs companies significant resources.
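The tier logic above can be sketched as a toy lookup. The category sets below are illustrative examples drawn from this article, not the Act's legal definitions — real classification under the AI Act is far more involved:

```python
# Toy sketch of the AI Act's risk-tier logic. Category labels and the
# example use cases are simplified illustrations, not legal advice.

PROHIBITED = {"social scoring", "subliminal manipulation", "workplace emotion recognition"}
HIGH_RISK = {"credit scoring", "hiring", "education access", "law enforcement"}
LIMITED_RISK = {"chatbot", "deepfake generator"}  # transparency duties only

def risk_tier(use_case: str) -> str:
    """Map a use case to its (simplified) AI Act obligation tier."""
    if use_case in PROHIBITED:
        return "prohibited"
    if use_case in HIGH_RISK:
        return "high risk: conformity assessment, logging, human oversight"
    if use_case in LIMITED_RISK:
        return "limited risk: transparency requirements"
    return "minimal risk: no specific obligations"

print(risk_tier("hiring"))
```

The point of the tiered design is visible even in this caricature: the obligation attaches to the *use case*, not to the underlying technology — the same model can be banned in one deployment and unregulated in another.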

What has applied since February 2025

Since 2 February 2025, certain AI practices have been completely banned: social scoring by public authorities (state systems that score citizens' behaviour and restrict or grant rights depending on the score), emotion recognition in the workplace or educational institutions, remote biometric surveillance in public spaces (automated facial recognition that identifies individuals without their knowledge — with narrow exceptions), and AI systems that manipulate behaviour through subliminal influence. At the same time, AI literacy obligations have applied since then — companies must ensure their employees can meaningfully use and understand AI systems.

Since August 2025: GPAI rules and penalties

Since 2 August 2025, the rules for so-called General-Purpose AI (GPAI) models — i.e. large foundation models such as ChatGPT, Gemini or Claude — have taken effect. Providers must meet transparency and copyright obligations. Models with "systemic risks" (from approximately 10²⁵ floating-point operations for training) are subject to even stricter requirements: risk assessments, adversarial testing (targeted stress tests that try to provoke misbehaviour before attackers can), and incident reporting obligations.
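For a sense of scale, the 10²⁵ FLOP threshold can be checked with the common "6 × parameters × training tokens" rule of thumb for dense-transformer training compute — an approximation from the scaling-law literature, not something the AI Act itself prescribes. The model size and token count below are hypothetical:

```python
# Back-of-envelope check against the AI Act's 10^25 FLOP systemic-risk
# threshold, using the 6*N*D heuristic for dense-model training compute.
# (Heuristic and example figures are assumptions, not from the regulation.)

SYSTEMIC_RISK_FLOPS = 1e25

def training_flops(params: float, tokens: float) -> float:
    """Approximate training compute: ~6 FLOPs per parameter per token."""
    return 6.0 * params * tokens

# Hypothetical model: 400B parameters trained on 15T tokens.
flops = training_flops(400e9, 15e12)
print(f"{flops:.2e}", flops > SYSTEMIC_RISK_FLOPS)
```

Under these assumptions the hypothetical model lands at roughly 3.6 × 10²⁵ FLOPs — comfortably above the threshold, which is why frontier-scale models are presumed to carry systemic risk while most smaller fine-tuned models are not.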

At the same time, the full penalty regime has been in force since August 2025: up to €35 million or 7% of global annual turnover for violations of prohibited practices. €15 million or 3% for other violations. The Commission's AI Office in Brussels coordinates enforcement at EU level, while national authorities carry out primary supervision.
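The fine ceilings reduce to a two-line calculation — the max() reading follows the regulation's "whichever is higher" wording for undertakings; the turnover figure below is hypothetical:

```python
# Headline fine ceilings of the AI Act (Art. 99): the cap is the *higher*
# of a fixed amount and a share of global annual turnover. Percentages and
# fixed amounts as stated in the text above; example turnover is invented.

def max_fine(turnover_eur: float, prohibited_practice: bool) -> float:
    fixed, share = (35e6, 0.07) if prohibited_practice else (15e6, 0.03)
    return max(fixed, share * turnover_eur)

# A company with EUR 2bn global turnover violating a prohibited practice:
print(f"EUR {max_fine(2e9, True):,.0f}")  # 7% of 2bn = 140M, above the 35M floor
```

For large companies the turnover-based cap dominates; the fixed amounts of €35M/€15M act as floors so that small providers also face meaningful maximum exposure.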

Aug. 2024
AI Act enters into force — published in the EU Official Journal, directly applicable EU law
Feb. 2025
Prohibitions take effect — social scoring, mass biometric surveillance and manipulative AI banned
Aug. 2025
GPAI rules & penalties — large models must comply with transparency obligations; penalty regime active
Aug. 2026
Full application (planned) — high-risk AI systems must meet all requirements (education, employment, credit, policing…). ⚠️ The European Commission's Digital Omnibus proposal would push this deadline to December 2027 — the proposal is being debated in 2026.

The innovation dilemma

The AI Act has been criticised for disadvantaging European AI companies relative to US and Chinese competitors. France, Germany and other countries lobbied for weaker rules for GPAI models during the legislative process — fearing that Mistral and other European AI firms would be stifled. Since the law was passed, the Commission has been trying to simplify implementation and extend deadlines through its "Digital Omnibus" package. The fact remains: in 2024, US companies published 40 significant AI models; European institutions managed just three.

ℹ️
The EU is betting on regulated AI as a competitive advantage: an "AI made in EU" label is intended to build trust — similar to how the GDPR, the EU's 2018 data protection regulation, became a global data protection standard. Whether this succeeds depends on whether EU AI systems also remain technically competitive.
03 · The US under Trump

"Minimally Burdensome" — deregulation as state doctrine

No country has reversed its AI course as drastically as the United States. Joe Biden had signed one of the most comprehensive AI executive orders in history in October 2023: safety standards, anti-discrimination measures, requirements for government agencies. Donald Trump cancelled it on the first day of his second term, 20 January 2025 — the same day DeepSeek R1 was released.

Trump's philosophy: AI is a competitive instrument against China, not a subject for regulation. The phrase "minimally burdensome" runs through every AI document from this administration like a recurring theme. Regulation is treated as a brake, not a safeguard.

The Stargate project and $500 billion

Shortly after taking office, Trump announced the Stargate project jointly with OpenAI, SoftBank and Oracle: an AI infrastructure initiative worth $500 billion — the largest such investment in US history. The first step: $100 billion immediately, to build data centres across the US. The logic: whoever owns the infrastructure sets the AI rules.

Executive Order December 2025: putting states under pressure

On 11 December 2025, Trump signed a further executive order — a presidential directive with immediate effect that requires no congressional approval, but cannot repeal existing laws — that is radical in its scope. Its aim: to override state AI laws that conflict with the federal interest.

A federal law that could formally override state AI rules failed in Congress in 2025 — the Senate voted 99:1 against a 10-year moratorium on new state AI laws. The executive order tries to achieve the same thing through the back door. Whether this holds up legally is unclear: executive orders cannot directly repeal state laws.

⚠️
The paradox: While Trump fights state AI protection laws, the number of state AI laws is exploding — Colorado, California, New York, Utah. Companies must navigate 50 different regulatory frameworks as long as no federal law exists. The very "patchwork" that the administration criticises is perpetuated by its own blocking of federal legislation.

No AI without a chip war

US AI policy is inseparably linked to export controls. The Biden administration had progressively tightened restrictions on exporting Nvidia high-performance chips to China. Trump tightened this policy further in 2025. The result: China officially cannot access the world's best AI accelerators. This pressure has — paradoxically — fuelled Chinese innovation. DeepSeek was developed without access to the latest H100 chips and proved that algorithmic efficiency can partially compensate for hardware disadvantages.

74%
US share of global AI supercomputer capacity (2025)
14%
China's share despite 230 AI data clusters worldwide
4.8%
EU share — the smallest compute share despite the AI Act
04 · China: regulation as control

The world's first generative AI law — and what it really wants

On 15 August 2023, China's "Interim Measures for the Management of Generative Artificial Intelligence Services" entered into force — the world's first binding law specifically for generative AI. China was faster than the EU. But the motivation is different: whereas the AI Act tries to protect people from AI, China's regulation is designed to ensure that AI serves the state.

Core obligations for Chinese AI providers: registration of all large language models (LLMs) with the Cyberspace Administration of China (CAC). Content must not violate "socialist core values", spread fake news or "endanger national unity". Algorithms must be disclosed.

The "trio" regulation

China does not regulate AI with a single law, but with a system of three interlocking regulations: the Algorithmic Recommendation Measures (Jan. 2022), the Deep Synthesis Measures (deepfakes, Jan. 2023) and the Generative AI Measures (Aug. 2023). Together they cover the entire AI value chain — from recommendations through synthetic content to large language models.

In March 2025, a requirement to label AI-generated content followed (effective September 2025): texts, audio, images, videos and virtual scenarios must be explicitly or implicitly marked as AI-generated. At the same time, the National People's Congress is planning a comprehensive national AI law — concrete drafts were announced in 2025.

State strategy: leadership by 2030

Alongside regulation, China is pursuing the most aggressive state AI strategy of the three powers. As early as 2017, the State Council adopted the "Next Generation AI Development Plan" with the goal of making China the global AI superpower by 2030. The budget: 1 trillion yuan (~$140 billion) in technology investment by 2030.

Local governments compete with each other for AI talent and companies. Cities such as Beijing, Shanghai and Hangzhou award funding contracts, tax concessions and the coveted "National High-Tech Enterprise" status — most recently to DeepSeek, following its surprise success in January 2025.

DeepSeek: the Sputnik moment

On 20 January 2025, the day of Trump's second inauguration, the Chinese startup DeepSeek released its model R1 — an open-source model that matches GPT-4 on several benchmarks. The reaction was immediate: Nvidia's share price fell 17%, with market capitalisation dropping by over $600 billion in a single day. Trump called it a "wake-up call". Media outlets compared it to the Sputnik shock of 1957.

🔍
DeepSeek's significance lies not only in its technical performance. The model was developed with significantly less compute than comparable US models — direct evidence that US chip export controls slow things down, but do not stop them. Algorithmic innovation can partially compensate for hardware restrictions.

What AI in China is not allowed to do

Chinese users experience AI in a fundamentally different way. Questions about Tiananmen 1989, Taiwan as an independent state, Xinjiang internment camps or political leadership are consistently blocked or deflected by Chinese models. DeepSeek, though technically impressive, is just as censored in these areas as state platforms. The tension: technical openness and political control coexist — what is intended for international use is deployed outside China, where different rules apply.

05 · Direct comparison

Three systems, one table

Criterion by criterion (🇪🇺 EU · 🇺🇸 US · 🇨🇳 China):

Legal framework
  🇪🇺 Comprehensive law (AI Act), directly applicable
  🇺🇸 No federal law; executive orders + state laws
  🇨🇳 Several specific regulations; national law planned
Core principle
  🇪🇺 Risk-based — more risk = more obligations
  🇺🇸 Innovation-based — minimal restrictions
  🇨🇳 Security-based — state security & content control
Prohibitions
  🇪🇺 Social scoring, mass biometric surveillance, manipulation
  🇺🇸 No explicit AI prohibitions at federal level
  🇨🇳 Content against "socialist values", political subversion
Penalties
  🇪🇺 Up to €35M / 7% of turnover
  🇺🇸 No AI-specific federal penalties
  🇨🇳 Shutdown, licence revocation, criminal referral possible
Oversight
  🇪🇺 EU AI Office + national authorities
  🇺🇸 FTC, EEOC, FCC depending on context; DOJ AI Task Force
  🇨🇳 CAC (Cyberspace Administration of China) + 6 further ministries
Investment strategy
  🇪🇺 AI Factories, €1.3bn funding programme
  🇺🇸 $500bn Stargate (private/public), CHIPS Act
  🇨🇳 ¥1tn state-planned by 2030, provincial competition
Fundamental rights focus
  🇪🇺 Strong — discrimination, dignity, privacy
  🇺🇸 Moderate — free speech, anti-bias in federal agencies
  🇨🇳 Weak — fundamental rights subordinated
Model market
  🇪🇺 Mainly international providers; Mistral (FR) as EU representative
  🇺🇸 OpenAI, Anthropic, Google, Meta dominate globally
  🇨🇳 Baidu, Alibaba, DeepSeek, ByteDance — state-supported
Openness
  🇪🇺 Open to international providers under AI Act conditions
  🇺🇸 Export controls against China; otherwise open
  🇨🇳 Registration required; foreign services effectively blocked
06 · The power question

Who wins the AI race — and what does it mean?

By 2025, AI is no longer a purely technical topic. It has become the central arena of geopolitical systemic competition. The question is not only who builds the most capable models — but who sets the global rules for AI.

The "Brussels effect" — does it work for AI?

The EU proved with the GDPR that one region can set global data protection standards — even US companies had to adapt. The hope: the same will happen with the AI Act. Whoever wants access to the EU single market must follow EU rules. Countries such as Brazil, India and several African states have already introduced draft AI laws structurally modelled on the AI Act. But the difference from the GDPR is significant: AI is not primarily about data protection — it is a strategic technology field in which competitiveness is directly at stake.

The US — first mover defending its lead

The US remains unchallenged at the top in 2025: 74% of global AI supercomputer capacity, all ten of the world's most valuable AI companies, 40 significant models in 2024 alone. The Stargate deal, Anthropic's billion-dollar investments and Nvidia's market dominance structurally secure this lead. But: DeepSeek has shown that this lead is compressible. And the absence of clear federal rules makes US companies vulnerable — not from foreign regulation, but from domestic legal uncertainty.

China — catching up, but with weaknesses

China is closing the technology gap faster than expected. On two critical benchmarks (MMLU and HumanEval, standardised tests of language understanding and coding ability), Chinese models have achieved near parity. In AI patents, China leads with almost 70% of all global filings. In computer vision, China dominates research conferences. And DeepSeek has shown that US chip embargoes cannot withstand innovative workarounds.

At the same time, structural weaknesses are becoming apparent: up to 80% of newly built Chinese data centres stood partly idle in 2025, poorly positioned for current AI workloads. And fundamentally: Chinese AI cannot address politically sensitive topics — a massive disadvantage for global acceptance.

3
Significant AI models from EU institutions in 2024 (vs. 40 from the US)
60%
US share of global data centre capacity
70%
China's share of global AI patent filings
07 · What does this mean for us?

AI regulation affects everyone — concretely

Regulatory debates can feel abstract. But the three systems have direct consequences for everyday life — depending on which AI services you use and in which country you live.

As an EU citizen

From August 2026, high-risk AI systems in areas such as credit decisions, job applications, access to education and law enforcement will be subject to strict transparency and explanation obligations. You have a right to know whether an automated decision has been made about you. AI-generated content (deepfakes, synthetic images) must be clearly labelled. Emotion recognition systems in the workplace are banned as a rule.

As a US user

You live in a patchwork: California's extensive AI laws apply to you if you are in California; Colorado's discrimination protections if you live there — but no uniform federal rules. AI companies can train on your data with almost no restrictions, unless a state law applies. The Trump administration is actively trying to sabotage stronger state protection laws.

As a user of Chinese AI

DeepSeek and other Chinese models are technically impressive — and politically censored. Those who use them potentially hand data to companies with close ties to the Communist Party. At the same time, US services such as ChatGPT and Claude are not officially accessible in China. Chinese AI users live in a parallel ecosystem — technologically competitive, politically sealed off.

🌐
The global AI governance problem: None of the three powers has a solution for cross-border AI systems. If a US model discriminates against an EU user, or a Chinese model is deployed in Europe — which law applies? International AI governance is still in its infancy. The UN process on AI, the OECD principles and the G7 Hiroshima Framework are voluntary and lack enforcement mechanisms.

🔮 Conclusion: the race is open — the rules are not yet set

We are in the middle of a historic experiment: three of the world's most powerful states are simultaneously testing three fundamentally different models of AI governance — in real time, with an uncertain outcome. The EU is making the risky attempt to position regulation as a competitive advantage. The US is betting on deregulation and dominance. China combines state control with massive investment. What is clear: whichever philosophy prevails will not only determine how AI is built — but whose values are embedded in the next generation of intelligent systems.

Sources & references
01
EU AI Act — Official European Commission page
Full documentation, implementation timeline and guidelines for the AI Act (Regulation (EU) 2024/1689)
digital-strategy.ec.europa.eu
02
EU AI Act Implementation Timeline — AI Act Service Desk
Complete timeline of all milestones in the AI Act implementation
ai-act-service-desk.ec.europa.eu
03
DLA Piper: Latest Wave of AI Act Obligations — August 2025
Legal analysis of the August 2025 milestones of the AI Act, including the penalty regime and GPAI rules
dlapiper.com
04
China's Interim Measures for Generative AI — China Law Translate (English translation)
Full text of the Chinese Generative AI Regulation of 15 August 2023 in English
chinalawtranslate.com
05
IAPP: Global AI Governance — China
Comprehensive analysis of the Chinese AI regulatory framework 2023–2025, including algorithm filing data
iapp.org
06
White House: Executive Order "Ensuring a National Policy Framework for AI" (Dec. 2025)
Original text of the Trump executive order on national AI policy and opposition to state AI laws
whitehouse.gov
07
Paul Hastings: Trump Executive Order Challenging State AI Laws
Legal analysis of the December 2025 executive order and its limits
paulhastings.com
08
Recorded Future: US-China AI Gap Analysis 2025
Detailed analysis of the technological gap between US and Chinese AI models, benchmarks, talent and infrastructure
recordedfuture.com
09
CSIS: DeepSeek, Huawei, Export Controls and the Future of the US-China AI Race
Strategic analysis of DeepSeek's significance, chip export controls and their effectiveness
csis.org
10
RAND: Full Stack — China's Evolving Industrial Policy for AI
Analysis of Chinese AI industrial policy, state investment and its effectiveness (revised June 2025)
rand.org
11
Federal Reserve: State of AI Competition in Advanced Economies (Oct. 2025)
Comparison of AI infrastructure, supercomputer capacity and data centre figures for the US, China and the EU
federalreserve.gov
12
Lawfare: Beyond DeepSeek — How China's AI Ecosystem Fuels Breakthroughs (Feb. 2025)
Analysis of the Chinese AI ecosystem: state funding, provincial competition and private actors
lawfaremedia.org