Pentagon and Anthropic Clash Over Military Use of Claude AI Models

Last Updated: February 16, 2026

Tensions are reportedly rising between the U.S. Department of Defense and Anthropic over how the company’s Claude AI models can be used by the military.

According to Axios, the Pentagon is pressing AI firms to permit usage for all lawful military purposes.

The demand is said to extend to other major AI developers, including OpenAI, Google, and xAI, with varying levels of compliance.

Anthropic, however, is reportedly the most resistant, prompting threats that a $200 million government contract could be withdrawn.

Previous reporting indicated disagreements over whether Claude could be used in active military operations.

While the Wall Street Journal later reported that the model was involved in an operation targeting Venezuelan President Nicolás Maduro, Anthropic has disputed claims of operational involvement.

A company spokesperson told Axios that Anthropic’s position centers on strict limits against fully autonomous weapons and mass domestic surveillance, rather than opposition to all government use.

The standoff underscores the broader challenge of aligning commercial AI development with military objectives amid growing global security concerns.

Source: TechCrunch
