By Nadine Andrea Felber
New article: "Intentional machines: A defence of trust in medical artificial intelligence", by Georg Starke, Rik van den Brule, Bernice Simone Elger, and Pim Haselager
Trust constitutes a fundamental strategy for dealing with risk and uncertainty in complex societies. In line with the vast literature stressing the importance of trust in doctor–patient relationships, trust is therefore regularly suggested as a way of dealing with the risks of medical artificial intelligence (AI). Yet this approach has been challenged from different angles. At least two lines of criticism can be distinguished: (1) that trusting AI is conceptually confused, that is, that we cannot trust AI; and (2) that trusting AI is dangerous, that is, that we should not trust AI, particularly when the stakes are as high as they routinely are in medicine. In this paper, we aim to defend a notion of trust in the context of medical AI against both charges. To do so, we highlight the technically mediated intentions manifest in AI systems, which render trust a conceptually plausible stance for dealing with them. Drawing on literature from human–robot interaction, psychology, and sociology, we then propose a novel model for analysing notions of trust, distinguishing three aspects: reliability, competence, and intentions. We discuss each aspect and make suggestions as to how medical AI may become worthy of our trust.