The Fight Over AI Is Becoming a Fight Over Government Power in America

The Anthropic case has become one of the clearest examples yet of how AI can quickly shift from being a commercial technology to a contested political weapon.
Artificial intelligence is supposed to be the next great technological tool: a system to help people work faster, communicate better, and unlock new forms of productivity. But in the United States, AI is increasingly being pulled away from the public-facing world of convenience and into something much more serious: the machinery of state power. That tension is now playing out in court.
Last week, a federal judge in San Francisco issued a preliminary injunction against the Pentagon, temporarily blocking its decision to label Anthropic a “supply chain risk” and halting a Trump administration directive ordering federal agencies to stop using the company’s AI model, Claude. The ruling was significant not just because Anthropic won an early legal victory, but because Judge Rita F. Lin made clear that the government’s actions appeared less about national security and more about punishment.
The dispute began after Anthropic CEO Dario Amodei refused to allow Claude to be used for autonomous weapons or for surveillance of American citizens. The Pentagon’s position was blunt: once the government buys a tool, it should be free to use it as it sees fit. Anthropic disagreed, arguing that there should be ethical limits on the deployment of powerful AI systems. That disagreement escalated fast. The Pentagon branded Anthropic a national security risk, and President Donald Trump ordered federal agencies to stop using Claude. Anthropic then filed two lawsuits, arguing that the government was retaliating against the company for publicly defending AI safeguards, in violation of its First Amendment rights.
Judge Lin’s ruling cut straight through the Pentagon’s argument. She noted that “supply chain risk” designations are generally used for foreign intelligence threats or terrorism-related concerns, not for American companies involved in a policy dispute with the government. She also wrote that the Pentagon’s actions appeared designed to “punish Anthropic” rather than protect the country. That should concern anyone paying attention. If the U.S. government can try to economically cripple an AI company simply because it refused unrestricted military and surveillance use, then this is not simply a fight over one product. It is a warning sign about where AI governance in America may be heading.

The real danger is government control over private AI.
What makes this case so important is that it reveals something much bigger than a contract dispute. The danger is not only that governments want to use AI for military operations, intelligence, or domestic monitoring. The greater danger is that they may try to turn private AI companies into extensions of state power. That is essentially what Anthropic is resisting. The company did not refuse to work with the government entirely. What it refused to do was remove core safeguards that would allow the military to use its AI however it wanted, including in areas relating to autonomous force and citizen surveillance. The Pentagon’s response was not to negotiate a more limited use case; it was to attempt to blacklist the company altogether. That is a serious line to cross.
If governments can pressure AI firms to open the backends of their systems or abandon their ethical limits to keep contracts, survive politically, or avoid blacklisting, then AI companies will stop being independent technology developers. They become state-aligned infrastructure. And once that happens, the public loses one of the last meaningful checks on how advanced AI is deployed. This is especially dangerous in the United States because AI is already being integrated across federal functions far beyond the consumer sphere. It is currently being explored and deployed for defense planning, battlefield analysis, logistics, intelligence sorting, predictive systems, and a growing range of internal administrative and security functions. In theory, these uses are presented as tools for efficiency and mission support. In reality, the boundaries are far less clear.
The Anthropic case exposes the contradiction at the heart of the American AI conversation. Washington frequently presents itself as a defender of responsible innovation and democratic technology values. But when a U.S. AI company actually tried to enforce ethical red lines, the government’s response was not respect, but retaliation. That sends a dangerous message to the industry: if your boundaries interfere with government ambitions, your business may pay the price.
China is presenting a far more disciplined and coherent position on how AI should be used.
This is where the contrast becomes politically and strategically important. While the United States appears to be moving toward broader and more aggressive state access to AI systems, China’s public position has consistently stressed limits, human control, and multilateral governance. Chinese officials have repeatedly stated that artificial intelligence in military applications must remain under human control. In response to reports that the U.S. military has sought unrestricted use of AI technologies, a spokesperson for China’s Ministry of National Defense said that giving algorithms the power to determine life and death risks “technological runaway” and undermines ethical accountability in war. China has also publicly opposed using AI to violate the sovereignty of other countries or to pursue absolute military dominance.

That framing matters. China’s official position, as reflected in statements from the Ministry of National Defense and Ministry of Foreign Affairs, is based on several consistent principles: a people-centered approach, meaningful human oversight, compliance with international humanitarian law, agile risk governance, and the use of multilateral institutions such as the United Nations to build rules around military AI. Its message is simple: AI should not be allowed to evolve into an unchecked military force or a destabilizing geopolitical weapon. Instead, it should be secure, controllable, and subject to ethical and legal constraints.
China’s stated position sounds more restrained and more responsible than what the United States is currently demonstrating. In China’s framing, AI is dangerous precisely because it can move too fast, concentrate too much power, and reduce human accountability if not carefully governed. In the United States, however, the Anthropic case suggests that, at least from the government’s perspective, the problem is that one company tried to impose too many limits in the first place. That contrast is clear. One side is publicly arguing that AI must remain under human control and within internationally recognized boundaries. The other is in court after allegedly trying to punish an AI company for refusing unrestricted use by the military and opposing surveillance of its own citizens.
The AI race is not about innovation; it is about the limits of power.
The most important question in the AI era is not who builds the smartest model. It is who gets to control it, who gets to define its limits, and whether those boundaries will mean anything once governments decide they want more. If the U.S. government can use blacklisting, pressure, and executive force to try to bend an AI company into compliance, then the issue is no longer theoretical. It is already here.
That is why this story matters far beyond Silicon Valley. The future of AI governance will be defined by whether governments are willing to accept meaningful limits on what they can do with these systems. Right now, China is publicly making the case for restraint, human management, and international rules. The United States, by contrast, is showing how easily “national security” can become a justification for demanding more control, fewer safeguards, and broader access. In the end, the country that leads in AI should not simply be the one that can use it most aggressively. It should be the one that still understands when not to.