
AI in Warfare: Technology, Ethics, and the Question of Accountability

Natalie Cockrell

Mar 3, 2026

As U.S. strikes on Iran made headlines, so did a quieter but consequential conflict: the Pentagon vs. the AI companies powering its military operations.

This past weekend, amid U.S. strikes on Iran, reporting confirmed that AI systems were deeply embedded in U.S. military planning and execution. At the same time, a public dispute between the Pentagon and two of the leading AI companies, Anthropic and OpenAI, forced a bigger question into view: when companies set ethical limits for military AI, who is actually accountable?

The U.S. military has used AI for years in areas like logistics, surveillance, and drone operations. What has changed is the depth of that integration. A key element of this shift is the "kill chain," the sequence of steps from identifying a target to authorizing action against it. Traditionally, each step of the chain required human judgment. AI is now being embedded in those steps to identify targets, cross-reference intelligence, and model outcomes. The unanswered question is how much of the chain can, or should, be handed over to an algorithm.

In late February 2026, the Pentagon gave Anthropic, the company behind Claude AI, a deadline to adjust its usage restrictions to enable broader military applications. Anthropic refused, drawing a firm line at two cases in which it would not allow its software to be used: mass domestic surveillance and fully autonomous weapons. CEO Dario Amodei stated publicly that AI used in these ways would only undermine, not defend, democratic values.

The dispute centered on the Pentagon's demand for "any lawful use" terms, which effectively asked Anthropic to remove its own ethical guardrails and defer to whatever federal law permits. Amodei's statement addressed this directly: "The Department of War has stated they will only contract with AI companies who accede to 'any lawful use' and remove safeguards in the cases mentioned above." Anthropic's refusal was, at its core, a rejection of that framing: the company maintained that its usage policies should set a higher bar than legal compliance alone.

The Pentagon responded by threatening to designate Anthropic a "supply chain risk," a label typically reserved for foreign vendors that pose cybersecurity threats. Legal experts noted that this appeared to be the first time the designation had been used against an American company as a punitive measure for refusing contract terms.

Following Anthropic's refusal, OpenAI secured the defense contract. The company stated it would maintain certain limits, including a ban on "high-stakes automated decisions." Even so, the contract marked a notable shift in its policies. In January 2024, OpenAI quietly removed language from its usage policy that explicitly prohibited military and warfare applications. The unannounced change, first reported by The Intercept and confirmed by TechCrunch, went live on January 10, 2024, and opened the door to the partnerships it now holds.

According to the Wall Street Journal, U.S. Central Command used Claude AI during the Iran strikes for target identification and prioritization, large-scale intelligence analysis, and battle simulations to model strike sequences. The deployment was made possible through partnerships with Palantir Technologies and Amazon Web Services. The strikes coincided with the Pentagon's public dispute with Anthropic, underscoring how quickly these decisions are moving. Notably, reports indicate that replacing Claude across Pentagon infrastructure could take three to six months, meaning continued use well into mid-2026 is likely regardless of the split.

The concerns surrounding AI in warfare are systemic.

Accountability gap. When AI assists a strike that kills the wrong person, who takes responsibility? There is no clear legal framework to answer this.

Lowering the conflict threshold. When machines bear most of the risk of warfare, the political cost of conflict falls, potentially making armed action easier to authorize.

Bias and error. AI models trained on historical data can encode biases. In a targeting context, this could result in disproportionate harm to particular populations, and unlike a human analyst, an AI system cannot easily explain its reasoning.

Fully autonomous weapons. The most consequential concern is the possibility of a system that identifies, selects, and engages targets without human involvement. No binding international treaty prohibits this. UN discussions on Lethal Autonomous Weapons Systems (LAWS) have been ongoing since 2014 without producing any enforceable agreement.

Supporters argue that AI can improve targeting precision and reduce civilian casualties, that it protects soldiers by replacing humans in dangerous roles, and that democratic nations must develop these capabilities before authoritarian ones do. The Pentagon's position was that federal law already bars fully autonomous lethal systems without human involvement, making Anthropic's usage restrictions redundant, and that those restrictions could "jeopardize critical military operations and potentially put our warfighters at risk."

Critics contend that existing prohibitions on AI in war are ineffective and poorly enforced, that embedding AI in the kill chain normalizes a trajectory toward full autonomy, and that accountability frameworks remain absent. The quiet removal of OpenAI's military prohibition is cited as evidence that, left unchecked, commercial incentives rather than ethical commitments will drive these decisions. Lawfare also noted that OpenAI's new contract language may amount to little more than "any lawful use" in practice, since its restrictions are tied to existing law and Defense Department policies that the government can change at any time.

The question now is not whether AI should be part of warfare; it already is. The questions that remain are what rules, oversight, and accountability mechanisms can and should apply. The dispute among Anthropic, the Pentagon, and OpenAI shows that these decisions are being made through contract negotiations between technology companies and government agencies, without public deliberation. Whether that process produces outcomes consistent with democratic values and international law deserves serious public attention.

