Anthropic’s widening conflict with the U.S. Department of Defense has not cut off Claude for mainstream enterprise users. Microsoft, Google, and Amazon have all indicated that Anthropic’s AI models remain available for commercial and civilian workloads, even after the Department of Defense designated Anthropic a supply-chain risk.
That distinction matters because the DoD action is narrower than a blanket platform-wide ban, but broader than a restriction on only direct military deployments. Reporting and company statements indicate the designation affects the use of Claude in Department of Defense contracts and in contractor supply chains tied to those contracts, while leaving non-DoD and non-defense customer access in place for now.
TL;DR
- The Department of Defense designated Anthropic a supply-chain risk.
- Microsoft, Google, and Amazon have said Claude remains available for non-defense customers and workloads.
- The implications extend to defense contractors and supply chains connected to DoD contracts, not just direct military use.
- Anthropic plans to challenge the designation in court, but the final legal outcome remains uncertain.
- The clash is especially notable because Claude had already been used in national security and military-related operations before the dispute escalated.
The Pentagon’s designation marks a major escalation in Anthropic’s dispute with defense officials. Reuters reported that the measure took effect immediately and restricts use of Anthropic technology in Department of Defense contracts. Legal experts cited by Reuters also said defense contractors would be expected to remove Anthropic tools from relevant supply chains connected to DoD work, even though uses outside those contract pathways may still continue.
That contractor angle is critical. This is not simply a question of whether Claude can be used by the military itself. The practical effect reaches companies working on DoD-linked contracts, which means the designation can influence procurement, integrations, and vendor relationships across defense contracting networks. At the same time, Anthropic has argued that the designation cannot lawfully block unrelated commercial use by the same customers outside specific DoD contract work.
Microsoft was the first major platform company to publicly reassure customers. TechCrunch reported that Microsoft's legal team concluded Anthropic products, including Claude, can remain available to customers other than the Department of Defense through services such as Microsoft 365, GitHub, and AI Foundry. Google gave a similar signal, saying Anthropic's products remain available through Google Cloud for non-defense projects, while TechCrunch reported that AWS customers and partners can continue using Claude for non-defense workloads.
The dispute is further complicated by the fact that Anthropic had been an active participant in national security work. In a February 26 statement, CEO Dario Amodei said Claude had already been extensively deployed across the Department of Defense and other national security agencies for intelligence analysis, modeling and simulation, operational planning, cyber operations, and related mission-critical applications. Reuters also reported that Claude had been used in military operations before the designation, adding to the stakes of the current clash.
Anthropic says it will challenge the designation in court, arguing the move is legally unsound, but the eventual outcome is still unclear. Courts could uphold, narrow, or otherwise reshape how the designation applies in practice, and that uncertainty means the long-term impact on Anthropic's defense business, contractor relationships, and broader enterprise momentum is still unresolved.
For now, the clearest reading is that Claude remains available to non-defense customers across major cloud and software platforms, while defense-linked use faces immediate pressure and a potentially prolonged legal fight.