Benjamin Minhao Chen, Alexander Stremitzer & Kevin Tobia
Harvard Journal of Law & Technology, Volume 36, Number 1 (Fall 2022), pp. 127–169
Abstract: Should machines be judges? Some say “no,” arguing that citizens would see robot-led legal proceedings as procedurally unfair because the idea of “having your day in court” is thought to refer to having another human adjudicate one’s claims. Prior research established that people obey the law in part because they see it as procedurally just. The introduction of “robot judges” powered by artificial intelligence (“AI”) could undermine sentiments of justice and legal compliance if citizens intuitively view machine-adjudicated proceedings as less fair than the human-adjudicated status quo. Two original experiments show that ordinary people share this intuition: There is a perceived “human-AI fairness gap.” However, it is also possible to reduce — and perhaps even eliminate — this fairness gap through “algorithmic offsetting.” Affording litigants a hearing before an AI judge and enhancing the interpretability of AI decisions reduce the human-AI fairness gap. Moreover, the perceived procedural justice advantage of human over AI adjudication appears to be driven more by beliefs about the accuracy of the outcome and the thoroughness of consideration than by doubts about whether a party had an adequate opportunity to voice their opinions or whether the judge understood the perspective of the litigant. The results of the experiments support a common and fundamental objection to robot judges: There is a concerning human-AI fairness gap. Yet, at the same time, the results also indicate that the public may not believe that human judges possess irreducible procedural fairness advantages. In some circumstances, people see a day in a robot court as no less fair than a day in a human court.