Shilun Zhou (PhD Candidate)
International Journal for the Semiotics of Law
Published online: November 2025
Abstract: This article deconstructs the legal semiotics of “Responsible AI” through the lens of virtue jurisprudence, addressing ethical dilemmas in technology-driven knowledge creation within the humanities. It critiques the misleading anthropomorphisation of AI, arguing that “Responsible AI” should be understood as “responsible in name only” and “accountable in reality”. By distinguishing between moral agency and legal accountability, it highlights AI’s dual legal attributes: its anthropomorphic intelligent dimension and its distinct artificial nature. Although the terms “responsible” and “accountable”, as applied to AI, may appear semantically related at first glance, the virtue jurisprudence approach distinguishes the semiotic implications of “responsible AI” and “accountable AI” by highlighting the uniquely human capacity for moral assessment, which AI lacks; AI can therefore be held accountable but not responsible. Emphasising this moral capacity not only justifies humans’ refusal to be treated like machines but also provides a theoretical basis for a human-centred AI framework and guides the development of accountable AI in current legal practice. By examining the interplay between human virtue and technological systems, the article calls for a renewed focus on human-centric ethical principles in the age of AI-driven knowledge production.
