Defending Human Rights in the Era of AI

Anthropic, an American AI company, was designated a “supply chain risk” by the Pentagon and banned by the Trump administration because it refused to allow its AI product, Claude, to be used in fully autonomous lethal weapons or mass domestic surveillance. On March 26, 2026, a federal district court in California ruled that the government’s actions exceeded its authority and granted a preliminary injunction. The judge described the government’s conduct as “troubling” and warned that it threatened to “cripple” the company.
An AI company was blacklisted by its own government for refusing to kill. This is where the problem begins: when a company tries to uphold a moral bottom line, the government of the country that claims to “lead in AI global governance” chooses a ban over dialogue.
This incident exposes a fundamental issue: in a healthy social governance system, it is the state that should guard the moral bottom line.
The state possesses powers that corporations do not: legislative authority, regulatory oversight, and coercive force. It bears responsibilities that corporations are not required to shoulder: upholding fairness and justice, protecting citizens’ rights, and safeguarding human dignity. When the profit-seeking nature of capital threatens to spiral out of control, it is the state that draws an inviolable line through laws and institutions.
Yet in American AI governance, we see a reversal of roles. Corporations try to uphold the bottom line, while the government crosses it. The company refused to allow its technology to be used for autonomous killing, and the U.S. government punished it in return. When a state can wield its power so arbitrarily to punish a company standing by its ethical principles, what is called “AI governance” has devolved into a capricious exercise of power.
Where is the red line for lethal autonomous weapons systems?
The company’s refusal touches on the most sensitive issue in global AI governance: lethal autonomous weapons systems. There is no internationally accepted definition, but the term is generally understood to mean weapons systems capable of selecting and attacking targets without human intervention.
Such systems are widely regarded as a red line, for three reasons. First, the accountability vacuum: when a machine kills autonomously, who is responsible? The programmer? The commander? The algorithm? Second, the alarming risk of loss of control: once such a system is activated or hacked, the consequences can be catastrophic. Third and most fundamentally, entrusting decisions over life and death to algorithms violates human dignity.

International consensus is accelerating. UN Secretary-General António Guterres has repeatedly warned that lethal autonomous weapons systems are “politically unacceptable, morally repugnant,” emphasizing that “machines that have the power and discretion to take human lives without human control should be prohibited by international law.” In December 2024, the UN General Assembly adopted a resolution on lethal autonomous weapons systems by 166 votes to 3, demonstrating overwhelming recognition of the threat.
Yet the United States remains ambiguous. It has neither signed any international convention banning autonomous weapons nor enacted domestic legislation restricting their development. While loudly proclaiming “AI ethics,” it is accelerating AI weaponization. According to The Wall Street Journal and other news outlets, Claude has been deeply embedded in the U.S. military’s decision-making for operations against Iran, used for intelligence assessment, target prioritization, and strike simulation. AI has substantively participated in deciding “whom to kill, when to kill, and how to kill.”
The key to AI governance is putting human rights at the center
Let us return to the question at the beginning: why is the AI that refuses to kill being banned by the U.S. government?
The answer is clear: in American AI governance, technological hegemony trumps ethical boundaries, and military superiority trumps human dignity. When a corporation tries to hold the line that “AI should not kill,” it crosses the “interest red line” of national strategy. Punishment follows.
True AI governance must be grounded in human rights: the primacy of the right to life, meaning no AI application should come at the cost of human life; the inviolability of human control, meaning lethal decisions must remain in human hands; transparency and accountability, meaning AI decision-making must be traceable, reviewable, and accountable. Defining and defending these boundaries is the state’s responsibility, not the corporation’s.
In contrast, China has provided a clear answer. In 2023, China proposed the Global AI Governance Initiative, articulating “people-centered, AI for good,” emphasizing opposition to using AI to interfere in other countries’ internal affairs, and ensuring that AI always remains under human control. China has actively promoted the establishment of a global AI governance mechanism within the UN framework and supported the development of international rules to prevent the military application of AI in armed conflict.
Shortly after the Anthropic case captured global attention, China facilitated the launch of the World Data Organization in Beijing, emphasizing extensive consultation, joint contribution, and shared benefits.
AI can plan trips, verify facts, and impart knowledge. But it should never learn to kill. This is not a bottom line for companies to defend through litigation; it is a mission that should be voluntarily undertaken by every responsible government.
The article reflects the author’s opinions, not necessarily the views of China Focus.

