The Pentagon has intensified its standoff with Anthropic, demanding expanded military access to its AI model, Claude. According to reports, Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei until Friday evening to provide the military with unrestricted access to the model or face significant consequences.
The dispute marks a pivotal moment in the evolving relationship between national security agencies and frontier AI developers.
Pentagon Threatens Defense Production Act
During a contentious meeting on Tuesday, Hegseth reportedly told Amodei that the Department of Defense would either cut ties with Anthropic and declare it a “supply chain risk” or invoke the Defense Production Act. The DPA allows the president to compel private companies to prioritize contracts deemed essential to national defense.
“The only reason we’re still talking to these people is we want them, and we need them now. The problem for these guys is they’re that good,” a defense official told Axios ahead of the meeting. Claude currently serves as the only AI model deployed within the military’s most sensitive classified systems. That reliance complicates any effort to sever ties.
Core Tension: Safeguards vs. Operational Freedom
Anthropic has said it is willing to adapt its usage policies to support national security missions. At the same time, the company has drawn firm boundaries: it has refused to allow Claude to support mass surveillance of Americans or autonomous weapons that operate without human oversight.
Hegseth reportedly told Amodei that the Pentagon could not accept a private company dictating use cases. He also cited claims that Anthropic had raised concerns with its partner Palantir about Claude’s role in the Venezuela “Maduro raid.” Amodei denied those claims and reiterated that Anthropic’s red lines have not interfered with Pentagon operations.
A senior defense official described the meeting as “not warm and fuzzy at all.” Another source characterized it as cordial but firm, noting that Hegseth praised Claude’s capabilities.
Strategic Risk: Replacing Claude
Declaring Anthropic a supply chain risk could trigger broad certification requirements across defense contractors to ensure Claude is not embedded in their workflows. Yet replacing Claude presents immediate technical challenges.
The Pentagon is expanding discussions with OpenAI and Google to bring their models into classified systems. Elon Musk’s xAI recently secured a contract to deploy Grok in classified settings. Even so, sources familiar with the matter suggest Claude currently leads in applications relevant to offensive cyber capabilities.
Google’s Gemini is viewed as a potential replacement, but any agreement would likely require terms similar to those Anthropic rejected—particularly permitting use for “all lawful purposes.”
Legal and Industry Implications
Invoking the Defense Production Act in such an adversarial context would be unusual. One defense consultant noted that Anthropic could challenge the action in court, arguing that the DPA applies to commercially available goods rather than custom-built software tailored for sensitive government systems.
The Pentagon’s decision carries broader implications for AI governance. As foundation models become embedded in classified infrastructure, the question shifts from model performance alone to policy alignment, operational control, and legal authority.