Benjamin Chen, Alexander Stremitzer, and Kevin Tobia
SSRN & the University of California, Los Angeles School of Law, Public Law & Legal Theory Research Paper Series.
Featured on the Legal Theory Blog
Published in May 2021
Abstract: Should machines be judges? Some balk at this possibility, holding that ordinary citizens would see a robot-led legal proceeding as procedurally unfair: To have your “day in court” is to have a human hear and adjudicate your claims. Two original experiments assess whether laypeople share this intuition. We discover that laypeople do, in fact, see human judges as fairer than artificially intelligent (“AI”) robot judges: All else equal, there is a perceived human-AI “fairness gap.” However, it is also possible to eliminate the fairness gap. The perceived advantage of human judges over AI judges is related to perceptions of the accuracy and comprehensiveness of the decision, rather than to “softer” and more distinctively human factors. Moreover, the study reveals that laypeople are amenable to “algorithm offsetting”: adding a hearing before the AI and increasing the AI’s interpretability reduce the perceived human-AI fairness gap. Ultimately, the results support a common challenge to robot judges: there is a concerning human-AI fairness gap. Yet the results also indicate that the strongest version of this challenge — that human judges have inimitable procedural fairness advantages — is not reflected in the views of laypeople. In some circumstances, people see a day in robot court as no less fair than a day in human court.