WASHINGTON: A bitter dispute between the Pentagon and AI company Anthropic has deepened into a high-stakes public battle over the future of artificial intelligence in U.S. national security, with consequences rippling through technology markets and military partnerships.
The standoff began in February when Defense Secretary Pete Hegseth demanded Anthropic remove built-in safety guardrails from its Claude AI model to give the U.S. military unrestricted access for “all lawful uses.” The company refused, citing ethical objections to using Claude for fully autonomous weapons or mass domestic surveillance — applications it says pose risks beyond what current safety techniques can reliably control.
After a Feb. 27 deadline passed without agreement, President Donald Trump ordered federal agencies to stop using Anthropic’s technology, calling the company a “national security supply-chain risk,” and the Pentagon moved to sever its roughly $200 million contract.
Before the Pentagon’s ban on Anthropic’s technology took effect, Claude — the company’s advanced AI model — was deeply embedded in U.S. military operations and reportedly helped inform planning for the strikes on Iran. Reports from The Wall Street Journal and other outlets indicate that U.S. commanders relied on Claude hours after President Trump ordered federal agencies to cease using Anthropic’s AI, with the model assisting in processing intelligence, identifying potential targets and running battlefield simulations during the coordinated U.S.-Israeli air campaign. Claude had already been the only commercial AI approved for use on classified Department of Defense systems through a partnership with Palantir Technologies.
Anthropic has vowed to challenge the designation in court, arguing it is “legally unsound,” and insisted its safeguards are essential. In response to the backlash, the company’s Claude app has surged in popularity, rising to No. 1 on the U.S. Apple App Store, overtaking rivals including OpenAI’s ChatGPT.
Analysts say the fight highlights broader tensions over how far developers should police government use of AI in warfare and surveillance in the absence of clear congressional regulation. As one defense policy expert put it, the dispute underscores that the nation’s AI governance framework “is being shaped more by private negotiations than by public law.”
Industry shifts are already underway: OpenAI secured a new classified-use agreement with the Pentagon, signaling a strategic realignment within the U.S. defense tech ecosystem.
