The Future Ethics of Artificial Intelligence in Medicine: Making Sense of Collaborative Models - Science and Engineering Ethics
Recent developments in artificial intelligence (AI) and machine learning, such as deep learning, have the potential to make medical decision-making more efficient and accurate. Deep learning technologies can improve how medical doctors gather and analyze patient data as part of diagnostic procedures, prognoses and predictions, treatments, and disease prevention (Becker, 2019; Ienca & Ignatiadis, 2020; Topol, 2019a, 2019b). However, applied artificial intelligence raises numerous ethical problems, such as the severe risk of error and bias (Ienca & Ignatiadis, 2020, p. 82; Marcus & Davis, 2019), lack of transparency (Müller, 2020), and the disruption of accountability (De Laat, 2018). Describing these ethical challenges and concerns has so far been the main focus of the growing research literature on general AI ethics (Müller, 2020) and the ethics of medical AI (e.g., Char et al., 2018, 2020; Grote & Berens, 2019; McDougall, 2019; Vayena et al., 2018). Furthermore, if clinicians' decisions are to be substantially assisted, or even replaced, by AI and machine learning, this challenges shared decision-making, a central ethical ideal in medicine that protects patient autonomy by letting patients make informed choices about their healthcare in line with their values.