Major U.S. defense contractors are beginning to remove Anthropic’s Claude AI tools from their technology stacks after the Trump administration issued a government-wide ban on the company’s systems. Legal scholars and contracting attorneys note that the directive may face challenges in court, but companies that rely on Pentagon contracts are moving quickly to comply.
President Donald Trump announced the ban following a dispute between the Pentagon and Anthropic over guardrails governing the use of Claude in military programs. The order imposes a six-month phase-out period for federal agencies and extends pressure to companies that supply technology or services to the U.S. military.
Defense Secretary Pete Hegseth escalated the move by warning that Anthropic could be designated a national security supply chain risk. In a statement posted publicly, he wrote that “effective right away, no contractor, supplier or partner that does business with the US army may also conduct any commercial activity” with the company.
The announcement prompted quick responses from contractors seeking to protect their access to federal funding.
Lockheed Martin Signals Compliance
Lockheed Martin confirmed it would follow the administration’s direction and remove the AI tools where necessary. “We will follow the president’s and the Department of War’s direction,” the company said in a statement. “We expect minimal impacts,” it added, noting that it does not rely on a single AI vendor for any portion of its work.
Other major defense contractors—including General Dynamics, RTX, and L3Harris—declined to comment on whether they plan to comply with the directive. Industry lawyers say that hesitation most likely reflects ongoing internal reviews of procurement and technology dependencies.
Legal Experts Question Authority Behind the Ban
While contractors appear ready to comply with the order, legal experts say the administration may lack clear authority to restrict contractors from using a technology vendor outside their government work.
Jason Workmaster, a government contracts attorney at Miller Chevalier, described the directive as unusually aggressive. “If and when challenged, there is a high likelihood that DOD would be found not to have the authority to do this, unless there are facts that we do not know about,” he said.
The Department of War could attempt to rely on its supply chain risk authority to justify the action. However, that authority generally applies when a vendor poses risks such as sabotage, surveillance, or disruption of government systems.
Alan Rozenshtein, a University of Minnesota law professor who studies technology law, said the government has not publicly shown that it followed the procedural steps required for such a designation. “Capitalism and free markets depend on the rule of law,” Rozenshtein said. “This is the opposite of that.”
Contractors Prioritize Federal Relationships
Despite the legal uncertainty, analysts say compliance is likely across the defense sector. Companies with large government contracts often respond quickly to policy signals from Washington. Attorneys who advise defense firms say that maintaining access to the Pentagon’s trillion-dollar annual budget outweighs the risk of quickly dropping a single AI provider.
Franklin Turner, a government contracts attorney, said companies are likely already adjusting their technology stacks. “Most companies that do significant business with the government are hyper-attuned to what the U.S. Government wants, and they’re probably already taking steps to cleanse their supply chains of Anthropic,” Turner said.
Anthropic has indicated it will challenge the ban in court, setting the stage for a legal battle that could shape how the federal government regulates commercial AI systems used in national security contexts.