New article: Embedded Ethics Could Help Implement the Pipeline Model Framework for Machine Learning Healthcare Applications. Fiske A, Tigard D, Müller R, Haddadin S, Buyx A, McLennan S.
The field of artificial intelligence (AI) ethics has exploded in recent years, with countless academics, organizations, and influencers rushing to consider how AI technology can be developed and implemented in an ethical manner (Jobin et al. 2019). Careful consideration of the ethical implications of AI applications in healthcare settings is particularly important given the capability of these applications to threaten the preferences, safety, and privacy of patients in various states of vulnerability (Rigby 2019). The systematic identification of pertinent ethical issues surrounding the use of such technology in healthcare is therefore an important first step, and the pipeline model framework proposed by Char et al. (2020) represents a substantial contribution toward ethics in machine learning healthcare applications (ML-HCAs).
However, if ethical approaches such as the pipeline model framework are to influence and improve the development and implementation of AI technology, we still need to consider how those involved in developing ML-HCAs can be assisted in applying the framework, so that it does not remain a lofty ideal but shapes actual development practice. What remains unresolved, in other words, is how the proposed pipeline model framework should be implemented.
Among the key stakeholder groups acknowledged by Char et al. are the developers of ML-HCAs. Undoubtedly, it is imperative to ensure that developers of such technologies are equipped to identify and address ethically relevant issues. Yet, it is this very group that often lacks a standardized training regime and is left to translate high-level ethical principles as they individually see fit (Mittelstadt 2019). Indeed, many AI developers do not have the requisite training to recognize ethical considerations or make use of ethical principles; they come from a range of disciplines and professional backgrounds, most of which do not include systematic ethics education.
Partly in response to this problem, there has been a push to improve the general ethical awareness and capabilities of those on the technical side of emerging ML applications. Leading universities and research institutions are building ethics into their technical curricula with the goal of increasing ethical awareness and critical reasoning among their programmers, developers, and engineers (Fiesler et al. 2020; Grosz et al. 2019). Still, full proficiency in ethical reasoning and in applying principles to real-world situations requires concentrated theoretical education, extensive training, and a broad practical toolbox of methodological approaches. Unless university curricula and corporate training programs are dramatically expanded to produce experts in both technical disciplines and ethics, it is unrealistic to expect future developers to adequately consider and respond, without assistance, to the ethical issues arising from the technologies they develop.
As Char et al. help make clear, we must consider how exactly AI developers can be assisted in identifying and responding to “standard and potentially novel ethical considerations” arising from ML applications. This is particularly important in the development of AI technology intended for healthcare settings. Healthcare AI applications have been found to be designed without explicit ethical considerations (Ienca et al. 2018), and significant challenges have been raised concerning the successful implementation of such tools in clinical environments (Cresswell et al. 2018).
We suggest that the recently proposed embedded ethics (McLennan et al. 2020) approach is ideally suited to implement the pipeline model framework into ML-HCA development pathways. The overarching aim of this approach is to help develop socially and ethically sensitive technologies, particularly AI and robotic systems for healthcare settings. To achieve this goal, interdisciplinary ethical inquiry and deliberation are integrated into development processes from the beginning, so as to anticipate, identify, and address ethical and social issues that arise during the process of developing healthcare technologies, including the planning, ethics approval, designing, programming, piloting, testing and implementation phases of the technology.
The core of the embedded ethics approach involves the integration of an ethicist, or a team of ethicists, as dedicated members of AI and ML development teams in order to create routine, systematic exchanges between ethicists and developers. In this manner, embedded ethicists can conduct and explain their analysis to development colleagues, justifying particular positions in relation to project aims, and facilitating reflection and conversation about open or unresolved questions. Positioning ethicists in the development stages of healthcare AI will promote cutting-edge ethics training and meaningful pedagogical scholarship that helps to anticipate, and not simply respond to, ethical and social frictions in the application of AI technologies in healthcare.
Embedded ethicists would accompany the entire development process, what Char et al. refer to as the “pipeline of the conception, development, and implementation of ML-HCAs.” Incorporating an embedded ethicist, or a team of ethicists, to ask value-based questions would be an effective means of implementing this framework at each step, from the planning stages to questions that emerge while seeking regulatory approvals. In our conception, this process would include a transparent decision-making hierarchy, so that in cases of tradeoffs or significant dissent it is clear who holds responsibility for the decisions taken.
The authors identify three “caveats” of the pipeline model framework. We briefly suggest some ways that an embedded ethics approach could help implement the pipeline model framework, and in so doing, help to resolve these concerns.
(1) As the authors note, the pipeline model is incomplete and more work is necessary by diverse stakeholder groups to fill in the identified gaps and adapt the framework to specific contexts. One of the advantages of employing an embedded ethics methodology with this framework is that it is attuned to identifying unanticipated concerns because it is practice-based and not prescriptive. While existing assessments in AI ethics have pointed to an emerging convergence around principles that are similar to the traditional ethical principles used in medical ethics (e.g., transparency, fairness, non-maleficence), embedded ethics does not assume that emerging concerns will fit easily within established debates. In this sense, embedded ethics employs an ethnographically inspired approach to ethical concerns—one that is guided by a sense of openness and is accustomed to identifying “gaps” where more work is needed, as the authors suggest.
(2) The pipeline model framework does not identify which actors should be responsible for this work. The embedded ethics approach clearly answers this question. We argue that development teams should work together with ethicists, who have substantive domain-based knowledge, at every step of the process. The risk of deploying the pipeline framework without someone to guide these conversations is that ethical concerns may take a back seat to development or marketing concerns. But just as AI developers often lack systematic ethics training, few trained ethicists currently work in tech companies, and there is no established tradition of interchange between these fields. To this end, embedded ethics seeks to carve out institutional space and resources for the training of individuals who will work between tech and ethics spheres and help to answer the question of responsibility that remains open in Char et al.’s proposal.
(3) The identification of ML-HCA ethical considerations is, in itself, not sufficient and requires a secondary process of evaluating tradeoffs between different ethical benefits, concerns, and values. We fully agree. It remains to be shown that the ethical guidelines and principles that have emerged in recent years can succeed in addressing concerns in AI ethics. Some have called for developers to receive training in general ethical principles (Floridi and Strait 2020), and tech companies have begun hiring individuals responsible for “owning” ethics (Metcalf et al. 2019). Embedded ethics is a methodological approach designed to address this process of moving from identification to action in ML-HCA development, whether in academic or corporate settings.
As efforts to develop ethical frameworks for AI systems continue to emerge, it is encouraging to see the careful, systematic attention to ethical considerations proposed by Char et al. We believe their pipeline model framework represents a significant step forward, one that could be fruitfully combined with an embedded ethics approach to better identify and respond to ethically relevant issues in healthcare AI.