Explainability for artificial intelligence in healthcare: A multidisciplinary perspective

Artificial intelligence (AI) is increasingly making its way into many aspects of health informatics, backed by a vision of improving healthcare outcomes and lowering healthcare costs. However, like many other such technologies, it comes with legal, ethical, and societal questions that deserve further exploration. Amann et al. do just that in this 2020 paper published in BMC Medical Informatics and Decision Making, examining the concept of "explainability," or why an AI reached the conclusion it did in performing its task. The authors provide brief background and then examine AI-based clinical decision support systems (CDSSs) to offer various perspectives on the value of explainability in AI. They first consider what explainability means from a technological perspective, then examine its importance from legal, medical, and patient perspectives. Finally, they examine the ethical implications of explainability using Beauchamp and Childress' Principles of Biomedical Ethics. The authors conclude "that omitting explainability in CDSSs poses a threat to core ethical values in medicine and may have detrimental consequences for individual and public health."

Please read the entire article.
