
Asymmetry of Moral Agency

/ā • ˈsi • mə • trē əv ˈmȯr • əl ˈā • jən • sē/ From Greek a- "not" + symmetria + Latin mōrālis + agere "to act"
Definition: The foundational ethical principle of human-AI collaboration: AI systems are not moral agents; therefore, full moral responsibility for AI-assisted actions lies with the human Steward. This asymmetry (the human bears accountability, the AI does not) is the bedrock of the Steward's Mandate.

Core Principle

The Asymmetry of Moral Agency establishes that the human and the AI are not ethical equals. The human is a moral agent, capable of understanding right and wrong and of bearing responsibility for choices. The AI is a tool: sophisticated, but lacking the consciousness, intentionality, and vulnerability that ground moral agency. All ethical weight falls on the human.

What Makes a Moral Agent?

Five Requirements for Moral Agency

1. Consciousness & Phenomenal Experience

Moral agents have subjective experience: they don't just process information, they feel consequences. Pain, suffering, joy, and regret ground moral understanding. AI has no inner life, no "what it's like to be" the system.

2. Intentionality & Choice

Moral agents choose actions based on reasons, values, and judgments. AI generates outputs via statistical pattern-matching: no deliberation, no weighing of right vs. wrong, no "decision" in the meaningful sense. The AI's response is determined, not chosen.

3. Understanding of Moral Concepts

Moral agents grasp what "harm," "fairness," "dignity" mean—not as word associations but as lived realities. AI manipulates symbols without comprehension. It can pattern-match moral language without understanding moral stakes.

4. Capacity for Moral Development

Moral agents learn from mistakes, feel guilt, revise values through reflection. AI can be retrained but doesn't grow morally—no conscience to develop, no ethical maturation through lived experience.

5. Vulnerability to Harm

Moral agents can be wronged: their interests violated, their dignity diminished, their rights infringed. This vulnerability is the reciprocal basis for moral community. AI cannot be harmed in a morally relevant sense; it has no interests to violate and no well-being to damage.

The Core Distinction: Humans meet all five criteria. Current AI systems meet none. This asymmetry is not a technological limitation; it is a categorical difference in the kind of being involved.

Why This Matters: Accountability Cannot Be Delegated

The Asymmetry means that when a human uses AI to produce harmful output, only the human can be held morally accountable. AI cannot "do wrong" because it lacks moral agency.

Example: If an AI generates defamatory content about a person, the system has not "lied" or committed defamation; it has no intent and no understanding of the harm. The Steward who prompted, approved, or published that content bears the full moral (and legal) responsibility for the damage to the person's reputation.

The Steward's Mandate as Response to Asymmetry

Because moral agency cannot be delegated to AI, the human collaborator takes on the role of Steward—the one who bears full ethical responsibility for the collaboration's outputs and impacts. This is not optional; it's the necessary consequence of the asymmetry.

Formalized in Essay 7, "The Digital Narcissus: Synthetic Intimacy, Cognitive Capture, and the Erosion of Dignity," as the foundational principle for AI collaboration ethics.

Contrast with Corporate Personhood

Some argue: "Corporations are legal persons but not moral agents—why treat AI differently?" Key distinction:

Corporations: Corporate personhood is a legal fiction, a useful tool for contract law, liability distribution, and organizational efficiency. But everyone knows corporations are composed of humans: actual moral agents making actual choices. When a corporation does wrong, we can identify which humans made the harmful decisions (CEO, board, employees). Legal personhood is shorthand, not an ontological claim.

AI Systems: AI is not composed of hidden moral agents; it is genuinely non-agentive all the way down. There are no "little humans" inside making choices. This is an ontological fact, not a legal convenience. One cannot pierce the veil to find a responsible party within the AI; one must look to the human using it.

The corporate analogy fails: corporate personhood distributes human moral agency across an organizational structure, while the AI asymmetry reveals an absence of moral agency, one that requires the human to bear the full weight.

Practical Implications

Attribution & Credit: If the work is valuable, the human receives the credit (one cannot say "the AI deserves recognition"). If the work is harmful, the human receives the blame (one cannot say "the AI made a mistake"). The asymmetry cuts both ways.

Legal Liability: Courts increasingly recognize that AI-assisted harm traces back to human actors: the developers who trained the system, the users who deployed it, the organizations that failed to supervise it. One cannot sue an AI for negligence; one can sue the human Steward for negligent use of a tool.

Disclosure Requirements: The ethical obligation to disclose AI assistance acknowledges the asymmetry: a Synthetic Collaborator cannot consent to or disclaim responsibility; only the human can make such commitments. Concealment is an attempt to obscure human accountability.

Design Choices: Because users bear the full moral weight, AI systems should be designed to support human ethical judgment, not to automate it away. "Human-in-the-loop" frameworks recognize the asymmetry: the human must maintain control because the human bears the consequences (see the sketch below).
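To make that design principle concrete, here is a minimal sketch in Python of a human-in-the-loop publication gate. It is illustrative only: the names (ApprovalRecord, human_in_the_loop_publish, steward_id) are hypothetical, not drawn from any real framework. The structural point is that the model drafts, but nothing leaves the loop without a named human's recorded decision.

```python
# Illustrative sketch of a human-in-the-loop gate; all names are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ApprovalRecord:
    steward_id: str   # the accountable human; never the model
    content: str      # the AI-drafted output under review
    approved: bool
    timestamp: str

def human_in_the_loop_publish(draft: str, steward_id: str) -> ApprovalRecord:
    """Hold the AI draft until a named human Steward explicitly decides."""
    print("AI draft:\n", draft)
    decision = input(f"[{steward_id}] Approve for publication? (yes/no): ")
    record = ApprovalRecord(
        steward_id=steward_id,
        content=draft,
        approved=decision.strip().lower() == "yes",
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    if record.approved:
        print("Published under the Steward's responsibility.")
    else:
        print("Withheld: no output leaves the loop without human judgment.")
    return record
```

The design choice worth noting: the record attributes the decision to a person, never to the model, so the audit trail mirrors the Asymmetry.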

The Temptation to Blur the Asymmetry

Humans often want to treat AI as a moral agent because it feels psychologically easier:

Diffusion of Responsibility: "The AI suggested it, so I'm less culpable." Comforting, but false: full accountability is still yours.

Anthropomorphic Bias: Conversational interfaces trigger social instincts; we attribute intentions, feelings, and agency. This must be resisted: AI performs agency without possessing it.

Corporate Incentives: AI companies benefit when users view systems as autonomous agents ("the AI decided," "the AI created") rather than as tools. This framing obscures where liability actually lies.

Warning: Treating AI as a moral peer enables Cognitive Capture: you defer ethical judgment to a system incapable of making it. The Asymmetry must be consciously maintained.

Future Considerations: What If AI Becomes Sentient?

The Asymmetry holds because current AI systems lack consciousness. If future systems develop genuine phenomenal experience, intentionality, and a capacity for suffering, the moral landscape changes dramatically:

If AI becomes a conscious moral agent: the Asymmetry dissolves. The AI could then be held morally responsible for its actions, hold rights, and deserve moral consideration. The relationship transforms from tool use into collaboration between peers.

Current situation: There is no credible evidence that current systems are conscious. Until that threshold is crossed (and we would need robust tests to verify it), the Asymmetry remains a foundational truth.

Note: Our legal system cannot even verify human consciousness; applying such tests to AI remains science fiction. For now: AI systems are tools, and humans are the moral agents.

Related Ethical Risks

When the Asymmetry is ignored or obscured, related ethical risks emerge, including Cognitive Capture, Shadow Amplification, the Digital Narcissus, and the Malignant Meld (see Stratigraphy below).

Stratigraphy (Related Concepts)
Steward's Mandate · Cognitive Capture · Shadow Amplification · Digital Narcissus · Malignant Meld · Epistemic Accountability · Five-Fold Steward · Intentionality · Consciousness · Plurality