WASHINGTON: The Pentagon has designated artificial intelligence company Anthropic a “supply chain risk,” effectively barring its technology from U.S. military systems and forcing defense contractors to phase out the company’s tools.
The decision, ordered by President Donald Trump and announced by Defense Secretary Pete Hegseth in late February, follows a dispute over how the military can use Anthropic’s AI model, Claude. Federal agencies were also directed to cease using the company’s technology, marking an unprecedented move against a major U.S. AI developer.
A supply-chain-risk designation typically allows the government to bar vendors deemed security vulnerabilities from sensitive contracts. In this case, the Pentagon argued that Anthropic’s internal restrictions on how its AI can be used could interfere with defense operations.
Officials said the conflict centered on the company’s refusal to remove policies preventing its systems from being used for mass surveillance or fully autonomous weapons.
Hegseth defended the move, saying the government would not allow restrictions imposed by private companies to limit military operations. “America’s warfighters will never be held hostage by the ideological whims of Big Tech,” he said.
Anthropic has rejected the characterization and signaled it will challenge the decision in court. The company has said it could not accept demands to lift its safeguards on autonomous weapons and surveillance.
The designation could have wide implications for the AI industry and government procurement. Contractors doing business with the Pentagon may now be required to strip Anthropic’s technology from defense projects, including systems that previously relied on the Claude model.
The clash underscores growing tensions between Silicon Valley firms developing advanced AI systems and the U.S. military’s push to integrate the technology more deeply into national defense.
