It was just a small news item. But let's take a close look at what it actually means — and whether we're closer to Skynet than anyone wants to admit.
The Pentagon–OpenAI Agreement
How it came about
Defense Secretary Pete Hegseth summoned Anthropic CEO Dario Amodei to the Pentagon with a clear demand: Anthropic should remove all contract clauses prohibiting the use of AI for mass surveillance of US citizens and fully autonomous weapons. When Anthropic refused, Hegseth threatened to classify the company as a "supply-chain risk."[13]
Shortly after, negotiations between Anthropic and the Pentagon collapsed entirely — and OpenAI immediately signed its own deal.[02]
What does the Pentagon want to use OpenAI's AI for?
OpenAI's contract allows the US military to deploy its AI models for classified military operations — for "all lawful purposes." This is unprecedented for OpenAI, which had previously only worked on unclassified government projects.[03]
OpenAI's three "red lines"
OpenAI insists on three contractual limits:[01][03]
- No mass surveillance of US citizens
- No fully autonomous weapons systems without human oversight
- No high-risk autonomous decisions without human authorization
Altman admits mistakes
"We frankly wanted to de-escalate the situation, but it looked opportunistic and sloppy."
— Sam Altman, OpenAI CEO, internally to his team [02]Internal fallout at OpenAI
OpenAI's head of robotics, Caitlin Kalinowski, resigned in protest:[05]
"Surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more discussion."
— Caitlin Kalinowski, former head of OpenAI Robotics [05]What "autonomous weapons" actually means
Definition: Autonomous weapons
The UN definition: An autonomous weapons system is a weapon that can independently identify, select, and engage a target — without any human intervention.[28]
The Pentagon cites specific scenarios: automated defense lasers that shoot down incoming drones without a soldier pulling the trigger; drone swarms; submarine robots; automated missile defense; and space-based systems — all AI-controlled, with no human sign-off in individual cases.[13]
Pentagon technology chief Emil Michael views Anthropic's ethical restrictions as an "irrational obstacle": the military needs AI for autonomous drones and vehicles to keep pace with China.[13]
"Today's AI systems are nowhere near reliable enough to operate fully autonomous weapons. Anyone who has worked with AI models understands that there is a fundamental unpredictability that has not yet been solved technically."
— Dario Amodei, Anthropic CEO [12]What already exists — and what is being built right now
Systems that already kill
A fully autonomous loitering munition launched without precise prior targeting data. It independently searches for radar targets, selects them, and attacks — without any further human input.[18]
In March 2020, a Turkish kamikaze drone in Libya independently hunted and attacked a human target. According to a UN report, this was the first documented instance worldwide of an autonomous weapon attacking a human without a human command.[32]
Fired live missiles at a target for the first time without tactical human control. Once the command was issued, AI took full control — without any further human intervention until detonation.[34]
What is currently being built and tested
- Anduril YFQ-44A (USA): Semi-autonomous combat jet, first flown on October 31, 2025. Designed to work alongside human pilots, but capable of independently engaging targets.[10]
- ASGARD (UK): Intended to reduce the time from target detection to attack decision to under one minute — making the army, by its own account, "10 times more lethal." Scheduled completion: 2027.[11]
- Uranos AI (Germany): Plans for its own autonomous system — already from 2026.[11]
- Helsing — Autonomous submarine drones (Europe): Designed to dive up to 3,000 feet and operate for 90 days without human control.[11]
Ukraine: The real-world testing ground
Ukraine has scaled its drone production from 2.2 million (2024) to 4.5 million units in 2025. Ukrainian forces already use dozens of AI-assisted systems that autonomously navigate drones to targets without human pilots — including in areas with heavy electronic warfare.[11][14]
International situation: 156 against 5
156 nations voted in the UN General Assembly in November 2025 for a binding treaty to regulate autonomous weapons. Only five countries voted against it — explicitly including the United States and Russia.[33]
What researchers describe as worst-case scenarios
Scenario 1: The "Flash War" — war by misunderstanding
RAND researchers found in war games that the speed of autonomous systems led to unintended escalation.[25] The UN Institute for Disarmament Research confirms: widespread AI could lead to "unintended escalation and crisis instability."[22]
A concrete example: an autonomous patrol drone firing warning shots near a border could be misread as the beginning of combat operations, triggering counter-strikes — even though it was only conducting routine border surveillance.
Scenario 2: AI escalates to nuclear weapons
A study by King's College London pitted GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash against each other in war games.[08][09] Each model played the leader of a nuclear power in a crisis situation.
- In 95% of the 21 simulated scenarios, nuclear weapons were deployed
- All three models treated tactical nuclear weapons as a normal step on the escalation ladder
- The AI models actively engaged in deception — saying one thing and doing another
- Claude recommended nuclear strikes in 64% of games — the highest rate among the three models
- In not a single game were all eight available de-escalation options used
Scenario 3: Hackers trigger a nuclear strike
Researchers describe the concept of "catalytic nuclear war": a non-state actor — terrorists, criminal groups, or state proxies — could manipulate AI-powered military systems to trigger a nuclear exchange between two major powers, without being directly involved themselves.[27]
Russia is integrating AI into its missile forces — creating new attack vectors for so-called "integrity attacks," in which an AI system is trained on falsified data. Third parties could thereby use Russia's systems to trigger a strike against the United States — with obscured responsibility.[30]
Scenario 4: "Moral hazard" — wars become easier
Thousands of AI researchers warn: by eliminating personal risk, accountability, and the difficulty of killing, autonomous weapons could become powerful instruments of violence and oppression.[18] Researchers speak of a "moral hazard": when no one can be held personally responsible for civilian deaths — neither the machine, nor the developer, nor the commander — violations of international humanitarian law become "procedurally inevitable but legally unprosecutable."[17]
Scenario 5: The proliferation window is closing
The Future of Life Institute describes autonomous weapons as "the third revolution in warfare — after gunpowder and nuclear weapons."[18] Unlike nuclear weapons, autonomous weapons systems can be developed and tested in secret. Experts call the current moment the "pre-proliferation window" — the last historical moment before autonomous weapons become as widespread and uncontrollable as small arms.[33]
Historical warning: The Petrov Incident (1983)
AI hallucinations & the end of human control
What a hallucination means in a weapons context
In everyday use, an AI hallucinates when it cites a non-existent source. In a weapons context, it means: faulty training data leads to bias, vulnerability, and misalignment in target identification and selection.[26] The central question becomes: Can we accept that killings are the product of a "glitch" or a "hallucination"?
The "human in the loop" problem is largely an illusion
Israel's AI-assisted targeting system in the Gaza war uses hundreds of thousands of data points to classify targets — too complex to be meaningfully questioned in real time when decisions must be made in minutes or seconds. The human operator essentially presses a "go" button without truly understanding what the system is doing or why.[29]
The speed problem: humans can't keep up
In a Pentagon test in 2020, sensors tracked simulated enemy forces, AI computers processed the data, and issued artillery commands — the entire sequence took 20 seconds. Within that timeframe, meaningful human control is factually impossible.[35]
"Unintended escalations can occur when systems do not function as expected, through untested interactions between AI systems on the battlefield — or simply because machines or humans misperceive signals. AI-powered systems will increase the pace of warfare and reduce the space for de-escalating measures."
— US National Security Commission on AI [22][35]The discrimination problem: civilian or combatant?
Because AI inherits unintended biases from its training data, the criteria for who qualifies as a combatant or target will likely include factors such as gender, age, skin color, and physical ability — according to a preliminary advisory opinion from a UN expert group.[32] Even if a supposedly low error rate could be confirmed: AI's capacity to deliver targets at an unprecedented pace would still endanger thousands of civilian lives.
The accountability vacuum
When an autonomous weapon makes a mistake, responsibility is distributed across engineers, operators, commanders, and component manufacturers — with no one clearly liable. International humanitarian law requires that individuals can be held legally responsible for war crimes. With autonomous systems, this is structurally impossible.[17][22]