Friday, November 10, 2023

Calvin Ho on Generative AI and the Foregrounding of Epistemic Injustice in Bioethics (The American Journal of Bioethics)

"Generative AI and the Foregrounding of Epistemic Injustice in Bioethics"
Calvin Ho
The American Journal of Bioethics
Volume 23, 2023 - Issue 10
Published online: October 2023
Introduction: OpenAI’s Chat Generative Pre-trained Transformer (ChatGPT), Google’s Bard and other generative artificial intelligence (GenAI) technologies can greatly enhance the capability of healthcare professionals to interpret data across different data sources and locations with a simple query, as well as advance medical research through their ability to generate synthetic data (The Lancet Regional Health-Europe 2023). However, the performance of these technologies depends on the data they are trained on. Existing data may be seriously biased due to a lack of gender, ethnic, racial, social and/or religious diversity, a concern that the Global Alliance for Genomics & Health (2023) seeks to address in a recent initiative to promote global diversity in datasets within genomic research. If used in clinical medicine, the results from GenAI technologies present serious normative challenges that Cohen (2023) has clearly and succinctly set out, quite aside from the direct impact that they could have on human health and wellbeing.
    While it should come as no surprise to anyone that emerging health technologies tend to present normative and regulatory challenges, many of the “new-ish” problems that are anticipated to arise from the use of GenAI technologies in healthcare and research foreground intransigent concerns about epistemic injustice. I provide three reasons why GenAI’s clinical use is a big deal in bioethics. First, it highlights that bioethics does not adequately account for the impact that power dynamics and systemic biases have on knowledge production and dissemination. Marginalized individuals and communities still lack the capability to participate…

