Microsoft Backs Anthropic’s Right to Set AI Safety Limits as Pentagon Legal Battle Intensifies

by admin477351

Microsoft has publicly backed Anthropic’s right to set ethical boundaries on how its AI is used by the military, filing a supporting legal brief in a federal court in San Francisco as the company’s battle against the Pentagon intensifies. The filing urged the court to grant a temporary restraining order against the Defense Department’s controversial supply-chain risk designation. Amazon, Google, Apple, and OpenAI have also filed supporting briefs, underlining the technology industry’s broad concern about the precedent this case could set.

Anthropic triggered the Pentagon’s ire when it refused to allow its Claude AI to be used for mass surveillance of US citizens or to power autonomous lethal weapons during a $200 million contract negotiation. Defense Secretary Pete Hegseth responded by branding the company a supply-chain risk, a move that immediately began to disrupt Anthropic’s government business. The Pentagon’s technology chief later confirmed there was no chance the agency would reconsider its position.

Microsoft’s brief is particularly powerful given that the company directly uses Anthropic’s AI in systems it supplies to the US military and is one of the Pentagon’s most important technology partners. As a participant in the $9 billion Joint Warfighting Cloud Capability contract and holder of additional federal agreements, Microsoft has significant interests tied to this case. The company urged a collaborative framework in which the government and tech industry jointly define responsible standards for AI use in national security.

Anthropic’s simultaneous lawsuits in California and Washington DC argued that the supply-chain risk designation, normally reserved for companies linked to foreign adversaries, was being wielded as a political weapon against a US company for its public AI safety stance. In its court filings, Anthropic acknowledged that it does not currently believe Claude is safe for autonomous lethal decision-making, a judgment it said formed the foundation of its contract requirements. The company argued this uncertainty was precisely why ethical guardrails were necessary.

Congressional Democrats have separately raised alarms about AI in military targeting, writing to the Pentagon to ask whether AI tools were involved in a strike in Iran that reportedly killed over 175 civilians at an elementary school. The questions being raised in Congress echo those at the heart of Anthropic’s lawsuit and collectively point to a critical gap in US policy on AI in warfare. Microsoft’s visible support for Anthropic suggests that much of the technology industry is ready to engage on these questions openly and forcefully.