Regulation, dominance or state control?
Few topics shape geopolitics in 2025 more than artificial intelligence. But the three largest economic powers fundamentally disagree on what "correct" AI policy actually means. The EU has passed the world's first comprehensive AI law — and now fears being seen as hostile to innovation. The US under Trump scrapped Biden's sweeping AI executive order on the first day of his second term and is betting on maximum deregulation to beat China. China also regulates — but differently: the focus there is less on fundamental rights and more on what AI is allowed to say and whom it serves.
🇪🇺 EU
- Risk-based tiered model
- Prohibited AI practices since Feb. 2025
- Fines up to 7% of annual turnover
- Focus: fundamental rights & transparency
- AI Office in Brussels as overseer

🇺🇸 US
- No federal law; executive orders only
- Trump approach: deregulation, dominance
- State laws actively challenged
- Focus: economy & national security
- $500bn Stargate investment plan

🇨🇳 China
- World's first generative AI rules (2023)
- Mandatory: registration with authorities
- Content must conform to "socialist values"
- Focus: state control & leadership claim
- Goal: global AI leadership by 2030
The world's first AI law — and its teeth
On 1 August 2024, the EU AI Act (Regulation (EU) 2024/1689) entered into force. The world's first comprehensive AI law is no experiment — it applies directly in all 27 member states, without national implementing legislation. The approach is risk-based: the higher the potential harm a system could cause, the stricter the requirements.
What has applied since February 2025
Since 2 February 2025, certain AI practices have been completely banned: social scoring by public authorities, emotion recognition in the workplace or educational institutions, remote biometric surveillance in public spaces (with narrow exceptions), and AI systems that manipulate behaviour through subliminal influence. Since the same date, AI literacy obligations apply: companies must ensure their employees can meaningfully use and understand AI systems.
Since August 2025: GPAI rules and penalties
Since 2 August 2025, the rules for so-called General-Purpose AI (GPAI) models — i.e. large foundation models such as ChatGPT, Gemini or Claude — have taken effect. Providers must meet transparency and copyright obligations. Models with "systemic risks" (from approximately 10²⁵ floating-point operations for training) are subject to even stricter requirements: risk assessments, adversarial testing, and incident reporting obligations.
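For orientation, the 10²⁵ threshold can be related to model scale with the widely used heuristic that training compute is roughly 6 × parameters × training tokens. This is a sketch only; the model sizes below are illustrative and hypothetical, not figures for any real system.

```python
# AI Act presumption threshold for "systemic risk" GPAI models,
# expressed as cumulative training compute in FLOPs.
SYSTEMIC_RISK_FLOPS = 1e25

def approx_training_flops(n_params: float, n_tokens: float) -> float:
    """Widely used heuristic: total training compute is roughly 6 * N * D,
    where N is the parameter count and D the number of training tokens."""
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if estimated training compute meets the AI Act threshold."""
    return approx_training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_FLOPS

# Hypothetical model scales, for illustration only:
print(presumed_systemic_risk(100e9, 10e12))  # False: ~6e24 FLOPs, below 1e25
print(presumed_systemic_risk(500e9, 15e12))  # True: ~4.5e25 FLOPs, above 1e25
```

In practice the Commission's criteria are more nuanced than a single cut-off, but the heuristic shows why only the very largest frontier training runs fall under the stricter regime.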
At the same time, the full penalty regime has been in force since August 2025: up to €35 million or 7% of global annual turnover (whichever is higher) for violations of prohibited practices; up to €15 million or 3% for other violations. The Commission's AI Office in Brussels coordinates enforcement at EU level, while national authorities carry out primary supervision.
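The penalty arithmetic can be made concrete in a few lines — a minimal sketch, assuming the AI Act's rule that the applicable maximum is the higher of the fixed cap and the turnover percentage:

```python
def aiact_max_fine(global_turnover_eur: float, prohibited_practice: bool) -> float:
    """Maximum fine for a single AI Act violation, in euros.
    Assumption: the ceiling is whichever is higher, the fixed cap
    or the percentage of global annual turnover."""
    if prohibited_practice:
        # Prohibited practices: up to €35M or 7% of global annual turnover
        return max(35_000_000, 0.07 * global_turnover_eur)
    # Other violations: up to €15M or 3%
    return max(15_000_000, 0.03 * global_turnover_eur)

# A firm with €2bn global turnover violating a prohibited practice:
print(aiact_max_fine(2_000_000_000, True))   # 140000000.0 — 7% beats the €35M cap
# A small firm with €100M turnover: the fixed cap dominates
print(aiact_max_fine(100_000_000, True))     # 35000000
```

The design means large companies cannot treat the fixed caps as a calculable cost of doing business: for any firm with turnover above €500 million, the percentage component dominates.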
The innovation dilemma
The AI Act has been criticised for disadvantaging European AI companies relative to US and Chinese competitors. France, Germany and other countries lobbied for weaker rules for GPAI models during the legislative process — fearing that Mistral and other European AI firms would be stifled. Even after the law's passage, the Commission has been trying to simplify implementation and extend deadlines through its "Digital Omnibus" package. The fact remains: in 2024, US companies published 40 significant AI models; European institutions managed just three.
"Minimally Burdensome" — deregulation as state doctrine
No country has reversed its AI course as drastically as the United States. Joe Biden had signed one of the most comprehensive AI executive orders in history in October 2023: safety standards, anti-discrimination measures, requirements for government agencies. Donald Trump cancelled it on the first day of his second term, 20 January 2025 — the same day DeepSeek R1 was released.
The Stargate project and $500 billion
Shortly after taking office, Trump announced the Stargate project jointly with OpenAI, SoftBank and Oracle: an AI infrastructure initiative worth $500 billion — the largest such investment in US history. The first step: $100 billion immediately, to build data centres across the US. The logic: whoever owns the infrastructure sets the AI rules.
Executive Order December 2025: putting states under pressure
On 11 December 2025, Trump signed a further executive order that is radical in its scope. Its aim: to override state AI laws that conflict with the federal interest. The measures in detail:
- AI Litigation Task Force — the Justice Department is to challenge state AI laws in court, particularly Colorado's AI law (anti-discrimination), which is deemed "ideologically biased"
- Funding withdrawal — states with "burdensome" AI laws lose eligibility for federal broadband funding
- FCC proceedings — the Federal Communications Commission is to develop, within 90 days, a nationwide transparency requirement for AI systems that supersedes state requirements
- FTC statement — the Federal Trade Commission is to clarify when federal law overrides state AI disclosure requirements
A federal law that could formally override state AI rules failed in Congress in 2025 — the Senate voted 99:1 against a 10-year moratorium on new state AI laws. The executive order tries to achieve the same thing through the back door. Whether this holds up legally is unclear: executive orders cannot directly repeal state laws.
No AI without a chip war
US AI policy is inseparably linked to export controls. The Biden administration had progressively tightened restrictions on exporting Nvidia high-performance chips to China. Trump tightened this policy further in 2025. The result: China officially cannot access the world's best AI accelerators. This pressure has — paradoxically — fuelled Chinese innovation. DeepSeek was developed without access to the latest H100 chips and proved that algorithmic efficiency can partially compensate for hardware disadvantages.
The world's first generative AI law — and what it really wants
On 15 August 2023, China's "Interim Measures for the Management of Generative Artificial Intelligence Services" entered into force — the world's first binding law specifically for generative AI. China was faster than the EU. But the motivation is different: whereas the AI Act tries to protect people from AI, China's regulation is designed to ensure that AI serves the state.
The "trio" regulation
China does not regulate AI with a single law, but with a system of three interlocking regulations: the Algorithmic Recommendation Measures (Jan. 2022), the Deep Synthesis Measures (deepfakes, Jan. 2023) and the Generative AI Measures (Aug. 2023). Together they cover the entire AI value chain — from recommendations through synthetic content to large language models.
In March 2025, a requirement to label AI-generated content followed (effective September 2025): texts, audio, images, videos and virtual scenarios must be explicitly or implicitly marked as AI-generated. At the same time, the National People's Congress is planning a comprehensive national AI law — concrete drafts were announced in 2025.
State strategy: leadership by 2030
Alongside regulation, China is pursuing the most aggressive state AI strategy of the three powers. As early as 2017, the State Council adopted the "Next Generation AI Development Plan" with the goal of making China the global AI superpower by 2030. The budget: 1 trillion yuan (~$140 billion) in technology investment by 2030.
Local governments compete with each other for AI talent and companies. Cities such as Beijing, Shanghai and Hangzhou award funding contracts, tax concessions and the coveted "National High-Tech Enterprise" status — most recently to DeepSeek, following its surprise success in January 2025.
DeepSeek: the Sputnik moment
On 20 January 2025, the day of Trump's second inauguration, the Chinese startup DeepSeek released its model R1 — an open-source model that matches GPT-4 on several benchmarks. The reaction was immediate: Nvidia's share price fell 17%, with market capitalisation dropping by over $600 billion in a single day. Trump called it a "wake-up call". Media outlets compared it to the Sputnik shock of 1957.
What AI in China is not allowed to do
Chinese users experience AI in a fundamentally different way. Questions about Tiananmen 1989, Taiwan as an independent state, Xinjiang internment camps or political leadership are consistently blocked or deflected by Chinese models. DeepSeek, though technically impressive, is just as censored in these areas as state platforms. The tension: technical openness and political control coexist — what is intended for international use is deployed outside China, where different rules apply.
Three systems, one table
| Criterion | 🇪🇺 EU | 🇺🇸 US | 🇨🇳 China |
|---|---|---|---|
| Legal framework | Comprehensive law (AI Act), directly applicable | No federal law; executive orders + state laws | Several specific regulations; national law planned |
| Core principle | Risk-based — more risk = more obligations | Innovation-based — minimal restrictions | Security-based — state security & content control |
| Prohibitions | Social scoring, mass biometric surveillance, manipulation | No explicit AI prohibitions at federal level | Content against "socialist values", political subversion |
| Penalties | Up to €35M / 7% of turnover | No AI-specific federal penalties | Shutdown, licence revocation, criminal referral possible |
| Oversight | EU AI Office + national authorities | FTC, EEOC, FCC depending on context; DOJ AI Task Force | CAC (Cyberspace Administration) + 6 further ministries |
| Investment strategy | AI Factories, €1.3bn funding programme | $500bn Stargate (private/public), CHIPS Act | ¥1tn state-planned by 2030, provincial competition |
| Fundamental rights focus | Strong — discrimination, dignity, privacy | Moderate — free speech, anti-bias in federal agencies | Weak — fundamental rights subordinated |
| Model market | Mainly international providers; Mistral (FR) as EU representative | OpenAI, Anthropic, Google, Meta dominate globally | Baidu, Alibaba, DeepSeek, ByteDance — state-supported |
| Openness | Open to international providers under AI Act conditions | Export controls against China; otherwise open | Registration required; foreign services effectively blocked |
Who wins the AI race — and what does it mean?
By 2025, AI is no longer a purely technical topic. It has become the central arena of geopolitical systemic competition. The question is not only who builds the most capable models — but who sets the global rules for AI.
The "Brussels effect" — does it work for AI?
The EU proved with the GDPR that one region can set global data protection standards — even US companies had to adapt. The hope: the same will happen with the AI Act. Whoever wants access to the EU single market must follow EU rules. Countries such as Brazil, India and several African states have already introduced draft AI laws structurally modelled on the AI Act. But the difference from the GDPR is significant: AI is not primarily about data protection — it is a strategic technology field in which competitiveness is directly at stake.
The US — first mover defending its lead
The US remains unchallenged at the top in 2025: 74% of global AI supercomputer capacity, all ten of the world's most valuable AI companies, 40 significant models in 2024 alone. The Stargate deal, Anthropic's billion-dollar investments and Nvidia's market dominance structurally secure this lead. But DeepSeek has shown that this lead can shrink quickly. And the absence of clear federal rules leaves US companies exposed — not to foreign regulation, but to domestic legal uncertainty.
China — catching up, but with weaknesses
China is closing the technology gap faster than expected. On two critical benchmarks (MMLU, HumanEval), Chinese models have achieved near parity. In AI patents, China leads with almost 70% of all global filings. In computer vision, China dominates research conferences. And DeepSeek has shown that US chip embargoes can be partially offset by algorithmic workarounds.
At the same time, structural weaknesses are becoming apparent: up to 80% of newly built Chinese data centres stood partly idle in 2025, poorly positioned for current AI workloads. And fundamentally: Chinese AI cannot address politically sensitive topics — a massive disadvantage for global acceptance.
AI regulation affects everyone — concretely
Regulatory debates can feel abstract. But the three systems have direct consequences for everyday life — depending on which AI services you use and in which country you live.
As an EU citizen
From August 2026, high-risk AI systems in areas such as credit decisions, job applications, access to education and law enforcement will be subject to strict transparency and explanation obligations. You have a right to know whether an automated decision has been made about you. AI-generated content (deepfakes, synthetic images) must be clearly labelled. Emotion recognition systems in the workplace are banned as a rule.
As a US user
You live in a patchwork: California's extensive AI laws apply to you if you are in California; Colorado's discrimination protections if you live there — but no uniform federal rules. AI companies can train on your data with almost no restrictions, unless a state law applies. The Trump administration is actively trying to sabotage stronger state protection laws.
As a user of Chinese AI
DeepSeek and other Chinese models are technically impressive — and politically censored. Those who use them potentially hand data to companies with close ties to the Communist Party. At the same time, US services such as ChatGPT and Claude are not officially accessible in China. Chinese AI users live in a parallel ecosystem — technologically competitive, politically sealed off.
🔮 Conclusion: the race is open — the rules are not yet set
We are in the middle of a historic experiment: three of the world's most powerful states are simultaneously testing three fundamentally different models of AI governance — in real time, with an uncertain outcome. The EU is making the risky attempt to position regulation as a competitive advantage. The US is betting on deregulation and dominance. China combines state control with massive investment. What is clear: whichever philosophy prevails will not only determine how AI is built — but whose values are embedded in the next generation of intelligent systems.