Anthropic has told a federal appeals court that it has no technical ability to shut down its AI models once they are deployed by the Pentagon, according to a new filing. The disclosure came as part of an ongoing legal battle over the Defense Department's designation of the company as a supply chain risk. Anthropic argued it has no visibility into, and no "kill switch" for, its technology once it is in military hands.
The Pentagon's designation, which Anthropic is fighting, stems from the company's insistence on restricting the use of its Claude models for autonomous weapons and mass surveillance. Defense officials dismissed those red lines, prompting the dispute. The case reflects growing tension between AI developers and government agencies over the boundaries of acceptable use.
A D.C. appeals court previously rejected Anthropic's bid to pause the supply chain risk designation, but a California judge in a parallel case granted the company's request for a stay. The split leaves Anthropic blocked from new Pentagon contracts while allowing it to continue working with other federal agencies as the litigation proceeds.
The Pentagon maintains that Anthropic is improperly meddling in how its technology can be applied in sensitive military operations. The firm argues it has a right to enforce usage policies even after deployment. The outcome could set a precedent for how AI companies govern their tools in classified settings.
For its part, the Pentagon contends that Anthropic's usage restrictions are unnecessary and hinder military readiness, arguing that the department already has procedures in place to govern AI use and can test models before fielding them.