Recent events in the AI sector have raised serious concerns about the future of AI safety.
A few years ago, AI companies, legislators, and the public largely agreed that strong regulation and oversight were necessary.
There was optimism that international rules could prevent the most dangerous AI applications, while companies promised to prioritize safety over profits.
Anthropic’s Pentagon Dispute Highlights Military AI Concerns
The feud between Anthropic and the United States Department of Defense demonstrates the tension between corporate safety policies and military interests.
Anthropic’s contract originally prevented the Pentagon from using its Claude AI models for autonomous weapons or domestic surveillance.
The Pentagon attempted to remove these restrictions, and Anthropic’s refusal ended the contract.
Secretary of Defense Pete Hegseth declared Anthropic a supply-chain risk, preventing other government agencies from working with the company.
This incident shows the military’s reluctance to accept limits on AI deployment within its operations.
From “Race to the Top” to a Competitive AI Market
Anthropic initially implemented the Responsible Scaling Policy, linking AI model releases to safety measures and aiming to inspire industry-wide safety standards.
The policy intended to encourage a “race to the top” and guide other companies to prioritize safety.
While DeepMind and OpenAI adopted parts of this framework, Anthropic later acknowledged that the policy fell short.
In a blog post, the company said, “The policy environment has shifted toward prioritizing AI competitiveness and economic growth, while safety-oriented discussions have yet to gain meaningful traction at the federal level.”
Rivalries Between AI Companies Threaten Safety Initiatives
Competition between AI firms has intensified, affecting safety commitments.
After Anthropic’s Pentagon contract ended, OpenAI quickly signed its own agreement with the Department of Defense.
Anthropic CEO Dario Amodei criticized OpenAI CEO Sam Altman, saying, “Sam is trying to undermine our position while appearing to support it.”
OpenAI’s executives described the deal as a way to relieve pressure on Anthropic.
These conflicts illustrate the challenges of relying on industry self-regulation, especially when leaders cannot cooperate and competition is fierce.
Companies Affirm AI Safety Efforts Despite Pressure
Anthropic and OpenAI insist that safety remains a priority despite the Pentagon dispute.
Anthropic’s chief science officer, Jared Kaplan, said, “There are a lot of researchers at every lab that care a lot about doing the right thing. They want to see their research used for the betterment of humanity, and I think there is competition not just to make them more useful or capable, but also safer.”
OpenAI highlighted the rise of AI safety organizations since ChatGPT’s launch and said its internal safety teams are larger than ever.
OpenAI strategy officer Jason Kwon explained that safety may seem less visible because attention has expanded to labor impacts, economic growth, and global AI distribution.
Nonetheless, company leaders acknowledge that the power and potential of AI make imposing restraints difficult, as noted by Amodei: “AI is so powerful, such a glittering prize, that it is very difficult for human civilization to impose any restraints on it at all.”
PHOTO: GETTY IMAGES/JUSTIN SULLIVAN/CHANCE YEH
This article is a summary of several original articles. The full versions can be read at the following links:
https://www.wired.com/story/when-ai-companies-go-to-war-safety-gets-left-behind/
https://fortune.com/2026/03/05/anthropic-openai-feud-pentagon-dispute-ai-safety-dilemma-personalities/
This article was created with AI assistance.
While we make every effort to ensure the accuracy of our content, some information may be incorrect or outdated. Please let us know of any corrections at [email protected].
Wednesday, 18-03-26
