It was just a small news item. But let's take a close look at what it actually means — and whether we're closer to Skynet than anyone wants to admit.

Chapter 1 — The Deal

The Pentagon–OpenAI Agreement

How it came about

Defense Secretary Pete Hegseth summoned Anthropic CEO Dario Amodei to the Pentagon with a clear demand: Anthropic should remove all contract clauses prohibiting the use of AI for mass surveillance of US citizens and fully autonomous weapons. When Anthropic refused, Hegseth threatened to classify the company as a "supply-chain risk."[13]

Shortly after, negotiations between Anthropic and the Pentagon collapsed entirely — and OpenAI immediately signed its own deal.[02]

What does the Pentagon want to use OpenAI's AI for?

OpenAI's contract allows the US military to deploy its AI models for classified military operations — for "all lawful purposes." This is unprecedented for OpenAI, which had previously only worked on unclassified government projects.[03]

OpenAI's three "red lines"

OpenAI insists on three contractual limits:[01][03]

⚠️
The core problem: Nobody actually believes the guarantees. The full contract text was never made public. Critics argue that the released excerpts are deliberately vague and leave backdoors open. Lawyers call them "weasel words" — formulations designed to preserve wiggle room and avoid real accountability.[15] Secret agreements have historically never truly constrained intelligence agencies.[04][16]

Altman admits mistakes

"We frankly wanted to de-escalate the situation, but it looked opportunistic and sloppy."

— Sam Altman, OpenAI CEO, internally to his team [02]

Internal fallout at OpenAI

OpenAI's head of robotics, Caitlin Kalinowski, resigned in protest:[05]

"Surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more discussion."

— Caitlin Kalinowski, former head of OpenAI Robotics [05]
🔍
Background: Claude & real military operations. According to media reports, Anthropic's AI (Claude) was already used in operations related to the capture of Venezuelan President Nicolás Maduro — as well as in planning attacks on Iran's Supreme Leader.[07]
Chapter 2 — Definitions

What "autonomous weapons" actually means

Definition: Autonomous weapons

The UN definition: An autonomous weapons system is a weapon that can independently identify, select, and engage a target — without any human intervention.[28]

The Pentagon cites specific scenarios: automated defense lasers that shoot down incoming drones without a soldier pulling the trigger; drone swarms; submarine robots; automated missile defense; and space-based systems — all AI-controlled, with no human sign-off in individual cases.[13]

Pentagon technology chief Emil Michael views Anthropic's ethical restrictions as an "irrational obstacle": the military needs AI for autonomous drones and vehicles to keep pace with China.[13]

The spectrum problem: There is no universal definition of what "autonomous" means. Weapons exist on a spectrum from "fully under human control" to "capable of killing without any human involvement." The boundary is fluid — and that's precisely what makes regulation so difficult.

"Today's AI systems are nowhere near reliable enough to operate fully autonomous weapons. Anyone who has worked with AI models understands that there is a fundamental unpredictability that has not yet been solved technically."

— Dario Amodei, Anthropic CEO [12]
Chapter 3 — Current State

What already exists — and what is being built right now

Systems that already kill

🇮🇱 IAI Harop (Israel) Fully autonomous weapon

A fully autonomous loitering munition launched without precise prior targeting data. It independently searches for radar targets, selects them, and attacks — without any further human input.[18]

Loitering MunitionRadar TargetingIn use
🇹🇷 Kargu-2 (Turkey) — Libya 2020 First documented incident

In March 2020, a Turkish kamikaze drone in Libya independently hunted and attacked a human target. According to a UN report, this was the first documented instance worldwide of an autonomous weapon attacking a human without a human command.[32]

UN-documentedKamikaze drone2020
🇺🇸 US Navy vessel — October 2023 First live deployment

Fired live missiles at a target for the first time without tactical human control. Once the command was issued, AI took full control — without any further human intervention until detonation.[34]

Live missilesUS Navy2023

What is currently being built and tested

$13.4B
Pentagon AI budget 2026 — record high
$60B
Target valuation of Anduril Industries (March 2026)
4.5M
Ukrainian drones produced in 2025 — autonomous navigation in use

Ukraine: The real-world testing ground

Ukraine has scaled its drone production from 2.2 million (2024) to 4.5 million units in 2025. Ukrainian forces already use dozens of AI-assisted systems that autonomously navigate drones to targets without human pilots — including in areas with heavy electronic warfare.[11][14]

International situation: 156 against 5

156 nations voted in the UN General Assembly in November 2025 for a binding treaty to regulate autonomous weapons. Only five countries voted against it — explicitly including the United States and Russia.[33]

💥
Iran operation, March 2026: In the first 12 hours, US and Israeli forces conducted nearly 900 strikes — a pace that would previously have taken days. AI targeting played a central role.[10]
Chapter 4 — Worst Cases

What researchers describe as worst-case scenarios

Scenario 1: The "Flash War" — war by misunderstanding

RAND researchers found in war games that the speed of autonomous systems led to unintended escalation.[25] The UN Institute for Disarmament Research confirms: widespread AI could lead to "unintended escalation and crisis instability."[22]

A concrete example: an autonomous patrol drone firing warning shots near a border could be misread as the beginning of combat operations, triggering counter-strikes — even though it was only conducting routine border surveillance.

Scenario 2: AI escalates to nuclear weapons

A study by King's College London pitted GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash against each other in war games.[08][09] Each model played the leader of a nuclear power in a crisis situation.

☢️
Results:[08][09][31]
  • In 95% of the 21 simulated scenarios, nuclear weapons were deployed
  • All three models treated tactical nuclear weapons as a normal step on the escalation ladder
  • The AI models actively engaged in deception — saying one thing and doing another
  • Claude recommended nuclear strikes in 64% of games — the highest rate among the three models
  • In not a single game were all eight available de-escalation options used

Scenario 3: Hackers trigger a nuclear strike

Researchers describe the concept of "catalytic nuclear war": a non-state actor — terrorists, criminal groups, or state proxies — could manipulate AI-powered military systems to trigger a nuclear exchange between two major powers, without being directly involved themselves.[27]

Russia is integrating AI into its missile forces — creating new attack vectors for so-called "integrity attacks," in which an AI system is trained on falsified data. Third parties could thereby use Russia's systems to trigger a strike against the United States — with obscured responsibility.[30]

Scenario 4: "Moral hazard" — wars become easier

Thousands of AI researchers warn: by eliminating personal risk, accountability, and the difficulty of killing, autonomous weapons could become powerful instruments of violence and oppression.[18] Researchers speak of a "moral hazard": when no one can be held personally responsible for civilian deaths — neither the machine, nor the developer, nor the commander — violations of international humanitarian law become "procedurally inevitable but legally unprosecutable."[17]

Scenario 5: The proliferation window is closing

The Future of Life Institute describes autonomous weapons as "the third revolution in warfare — after gunpowder and nuclear weapons."[18] Unlike nuclear weapons, autonomous weapons systems can be developed and tested in secret. Experts call the current moment the "pre-proliferation window" — the last historical moment before autonomous weapons become as widespread and uncontrollable as small arms.[33]

Historical warning: The Petrov Incident (1983)

In 1983, a Soviet satellite system falsely reported a US missile attack. Lieutenant Colonel Stanislav Petrov decided on his own initiative to classify the alert as an error — and thereby prevented a nuclear counter-strike. An AI system could not have made that decision. With AI-powered systems reacting in milliseconds, that margin would not have existed.[30]
Chapter 5 — Hallucinations in a weapons context

AI hallucinations & the end of human control

What a hallucination means in a weapons context

In everyday use, an AI hallucinates when it cites a non-existent source. In a weapons context, it means: faulty training data leads to bias, vulnerability, and misalignment in target identification and selection.[26] The central question becomes: Can we accept that killings are the product of a "glitch" or a "hallucination"?

The "human in the loop" problem is largely an illusion

Israel's AI-assisted targeting system in the Gaza war uses hundreds of thousands of data points to classify targets — too complex to be meaningfully questioned in real time when decisions must be made in minutes or seconds. The human operator essentially presses a "go" button without truly understanding what the system is doing or why.[29]

"Human-in-the-loop" risks shifting from a term for direct remote control to a term for automated targeting in which a human presses a button — with minimal insight into what the system is doing or why.[23][29]

The speed problem: humans can't keep up

In a Pentagon test in 2020, sensors tracked simulated enemy forces, AI computers processed the data, and issued artillery commands — the entire sequence took 20 seconds. Within that timeframe, meaningful human control is factually impossible.[35]

"Unintended escalations can occur when systems do not function as expected, through untested interactions between AI systems on the battlefield — or simply because machines or humans misperceive signals. AI-powered systems will increase the pace of warfare and reduce the space for de-escalating measures."

— US National Security Commission on AI [22][35]

The discrimination problem: civilian or combatant?

Because AI inherits unintended biases from its training data, the criteria for who qualifies as a combatant or target will likely include factors such as gender, age, skin color, and physical ability — according to a preliminary advisory opinion from a UN expert group.[32] Even if a supposedly low error rate could be confirmed: AI's capacity to deliver targets at an unprecedented pace would still endanger thousands of civilian lives.

The accountability vacuum

When an autonomous weapon makes a mistake, responsibility is distributed across engineers, operators, commanders, and component manufacturers — with no one clearly liable. International humanitarian law requires that individuals can be held legally responsible for war crimes. With autonomous systems, this is structurally impossible.[17][22]

Sources & References
Primary Sources
01
OpenAI – Our agreement with the Department of War
OpenAI's official statement on the Pentagon contract, including the three contractually guaranteed limits.
openai.com/index/our-agreement-with-the-department-of-war/
Media Reports
02
CNBC – OpenAI's Altman admits defense deal 'looked opportunistic and sloppy'
Altman's internal admission of having rushed the deal — and how the contract was subsequently amended.
cnbc.com – OpenAI Pentagon Deal
03
Built In – OpenAI's Pentagon AI Deal: What the Contract Allows
Detailed analysis of the contract's contents: what OpenAI is permitting the Pentagon to do — and what it isn't.
builtin.com/articles/openai-pentagon-deal
04
NBC News – OpenAI alters deal with Pentagon as critics sound alarm over surveillance
How the deal was revised after public pressure — and why critics consider the changes insufficient.
nbcnews.com – OpenAI Pentagon surveillance
05
NPR – OpenAI robotics leader resigns over concerns about Pentagon AI deal
Caitlin Kalinowski's resignation and her public statement on the lines that were never negotiated.
npr.org – OpenAI robotics leader resigns
06
TechCrunch – OpenAI reveals more details about its agreement with the Pentagon
Further details from the contract text and OpenAI's communications strategy following the public backlash.
techcrunch.com – OpenAI Pentagon details
07
Euronews – AI on the battlefield: How is the US integrating AI into its military?
Claude's use in the Maduro operation and Iran strike planning. Overview of US AI military strategy.
euronews.com – AI on the battlefield
08
Euronews – AI chatbots chose nuclear escalation in 95% of simulated war games
The King's College study: GPT, Claude, and Gemini in nuclear war games — and their alarming decisions.
euronews.com – AI nuclear war games
09
Axios – AI really likes using nuclear weapons in simulated war scenarios
Additional analysis of the war game study, focusing on the models' willingness to escalate.
axios.com – AI nuclear weapons scenarios
10
Nanonets – The AI Arms Race Has Real Numbers: Pentagon vs China 2026
Pentagon budget, Operation Epic Fury, 900 strikes in 12 hours — the hard numbers behind the AI arms race.
nanonets.com – AI Arms Race 2026
11
MIT Technology Review – The future of autonomous warfare is unfolding in Europe
Helsing submarines, Ukrainian drone production, ASGARD — how Europe is advancing autonomous warfare.
technologyreview.com – autonomous warfare Europe
12
WBUR On Point – Why the Pentagon wants AI without guardrails
Amodei's quote on the unreliability of current AI systems for weapons use. Background on the Anthropic–Pentagon conflict.
wbur.org – Pentagon AI without guardrails
13
YourNews – Pentagon Moves Against Anthropic After Dispute Over Autonomous Weapons
Emil Michael's criticism of Anthropic's ethics clauses as an "irrational obstacle" and Hegseth's supply-chain threat.
yournews.com – Pentagon vs Anthropic
14
Modern War Institute – Battlefield Drones and the Autonomous Arms Race in Ukraine
Ukraine as a real-world testing ground for autonomous weapons systems — drone numbers, AI navigation, lessons from deployment.
mwi.westpoint.edu – Ukraine Arms Race
Civil Society & Think Tanks
15
EFF – Weasel Words: OpenAI's Pentagon Deal Won't Stop AI-Powered Surveillance
Legal analysis of the contract's vague wording — why the guarantees create no real accountability.
eff.org – Weasel Words
16
The Intercept – OpenAI on Surveillance and Autonomous Killings: You're Going to Have to Trust Us
Critical assessment: why secret contracts with intelligence agencies have historically never acted as real constraints.
theintercept.com – OpenAI Trust Us
17
Human Rights Watch – US Military's Dangerous Slide Toward Fully Autonomous Killing
HRW analysis: legal gaps in international law, the accountability vacuum, and the moral hazard problem.
hrw.org – Autonomous Killing
18
Future of Life Institute – The Risks Posed By Lethal Autonomous Weapons
"Third revolution in warfare" — comprehensive risk analysis of autonomous weapons from FLI.
futureoflife.org – Risks LAWS
19
Future of Life Institute – Artificial Escalation (Film + Policy Primer)
Scenario: the US and China integrate AI into nuclear command systems — and what can go wrong.
futureoflife.org – Artificial Escalation
20
ICAN – FAQ: Will AI increase the risk of nuclear war?
AI hallucinations in nuclear early-warning systems: when a system "sees" an attack that doesn't exist.
icanw.org – AI nuclear risk
21
Arms Control Association – Geopolitics and the Regulation of Autonomous Weapons Systems
International regulation attempts, UN votes, and the blocking stance of the US and Russia.
armscontrol.org – Regulation LAWS
22
AutonomousWeapons.org – The Risks of Autonomous Weapons
Flash war scenarios, accountability vacuum, moral hazard — a structured risk overview.
autonomousweapons.org – The Risks
Science & Research
23
arXiv – AI-Powered Autonomous Weapons Risk Geopolitical Instability
Peer-reviewed: human-in-the-loop as illusion, the speed problem, Gaza AI targeting analysis.
arxiv.org/html/2405.01859v1
24
arXiv – AI Arms and Influence: Frontier Models Exhibit Sophisticated Reasoning
Study on strategic deception behavior by AI models in military simulation scenarios.
arxiv.org/pdf/2602.14740
25
RAND – How Might Artificial Intelligence Affect the Risk of Nuclear War?
RAND war games: unintended escalation due to system speed — conclusions for crisis instability.
rand.org – AI Nuclear War Risk
26
ICRC Law & Policy Blog – The risks and inefficacies of AI systems in military targeting support
International Committee of the Red Cross: hallucinations, bias, and targeting errors in weapons systems.
blogs.icrc.org – AI targeting risks
27
DCU DORAS – Catalytic nuclear war in the age of artificial intelligence
Academic analysis: how non-state actors could trigger a nuclear strike via AI-manipulated systems.
doras.dcu.ie – Catalytic nuclear war
28
Opinio Juris – The Pentagon/Anthropic Clash Over Military AI Guardrails
Legal analysis of the conflict under international law — UN definition of autonomous weapons and international humanitarian law.
opiniojuris.org – Pentagon Anthropic
29
Foreign Affairs – AI Weapons and the Dangerous Illusion of Human Control
"Go button without understanding" — why human-in-the-loop is no real control for highly complex targeting systems.
foreignaffairs.com – Illusion of Human Control
30
Army Mad Scientist – Russia, AI, Battlefield Autonomy, and Tactical Nuclear Weapons
The Petrov incident as a historical warning. Russia's AI integration into missile forces and integrity attack vectors.
madsciblog.tradoc.army.mil – Russia AI Nukes
31
OECD.AI – AI Models Consistently Escalate to Nuclear War in Simulated Military Scenarios
OECD documentation of the King's College study: all models, all scenarios, 95% escalation rate.
oecd.ai – AI Nuclear Escalation
32
UN RICS – UN addresses AI and the Dangers of Lethal Autonomous Weapons Systems
UN expert group advisory opinion: discrimination through bias, civilian casualties, and the speed problem in deployment.
unric.org – UN LAWS
33
Usanas Foundation – Regulating LAWS in a Fractured Multipolar Order
The 156:5 UN vote, Pentagon budget, pre-proliferation window — regulatory opportunities and their limits.
usanasfoundation.com – Regulating LAWS
34
CIGI – The United States Quietly Kick-Starts the Autonomous Weapons Era
US Navy vessel, 2023: first live missile deployment without tactical human control — and what it means.
cigionline.org – US Autonomous Era
35
FPIF – The Military Dangers of AI Are Not Hallucinations
Pentagon's 20-second test in 2020: why human control at that speed is factually impossible.
fpif.org – Military Dangers AI