New Publication: Re-focusing explainability in medicine

Laura Arbelaez Ossa, Georg Starke, Giorgia Lorenzini, Julia E Vogt, David M Shaw and Bernice Simone Elger

Abstract

Using artificial intelligence to improve patient care is a cutting-edge approach, but its implementation in routine clinical practice has been limited by significant concerns about understanding its behavior. One major barrier is the explainability dilemma: how much explanation is required to use artificial intelligence safely in healthcare. A key issue is the lack of consensus on the definition of explainability among experts, regulators, and healthcare professionals, resulting in a wide variety of terminology and expectations. This paper aims to fill that gap by defining minimal explainability standards that serve the views and needs of essential stakeholders in healthcare. To that end, we propose minimal explainability criteria that can support doctors’ understanding, meet patients’ needs, and fulfill legal requirements. On this view, explainability need not be exhaustive but only sufficient for doctors and patients to comprehend the clinical implications of artificial intelligence models and to integrate them safely into clinical practice. Thus, minimally acceptable standards for explainability are context-dependent and should respond to the specific needs and potential risks of each clinical scenario, enabling a responsible and ethical implementation of artificial intelligence.

DOI: 10.1177/20552076221074488