Buckley, Ross P.; Zetzsche, Dirk A.; Arner, Douglas W.; Tang, Brian W.
Sydney Law Review, Vol. 43, Issue 1, pp. 43-81 (March 2021)
Abstract: This article develops a framework for understanding and addressing the increasing role of artificial intelligence ('AI') in finance. It focuses on human responsibility as central to addressing the AI 'black box' problem: the risk of an AI producing undesirable results that go unrecognised or unanticipated, either because people have difficulty understanding the AI's internal workings or because the AI operates independently, outside human supervision or involvement. After mapping the various use cases of AI in finance and explaining its rapid development, we highlight the range of potential issues and regulatory challenges concerning financial services AI and the tools available to address them. We argue that the most effective regulatory approaches to addressing the role of AI in finance bring humans into the loop through personal responsibility regimes, thus eliminating the black box argument as a defence to responsibility and legal liability for AI operations and decisions.