Transparency in Healthcare AI: Testing European Regulatory Provisions against Users' Transparency Needs

Spagnolli, Anna, Tolomini, Cecilia, Beretta, Elisa, Sarra, Claudio

arXiv.org Artificial Intelligence 

Human Inspired Technologies Research Centre, Università degli Studi di Padova

Abstract

Artificial Intelligence (AI) plays an essential role in healthcare and is pervasively incorporated into medical software and equipment. In the European Union, healthcare is a high-risk application domain for AI, and providers must prepare Instructions for Use (IFU) according to European Regulation 2024/1689 (AI Act). In this regulation, the principle of transparency is cardinal and requires the IFU to be clear and relevant to users. This study tests whether these requirements are satisfied by the IFU structure. A survey was administered online via the Qualtrics platform to four types of direct stakeholders, i.e., managers (N = 238), healthcare professionals (N = 115), patients (N = 229), and Information Technology experts (N = 230). The participants rated the relevance of a set of transparency needs and indicated the IFU section addressing them. The results reveal differentiated priorities across stakeholders and a troubled mapping of transparency needs onto the IFU structure. Recommendations to build a locally meaningful IFU are derived.

Keywords: transparency, AI Act, healthcare, user-centeredness

1. Introduction

The software called Artificial Intelligence is the object of recent regulations and guidelines, such as the European Union AI Act (Artificial Intelligence Act, 2024), the US AI Risk Management Framework (NIST USA, n.d.), and UNESCO's Recommendation on the Ethics of Artificial Intelligence (UNESCO, 2022). Overall, these initiatives aim to increase the trustworthiness of AI technology, especially in application domains where mistakes and misuse have high costs for human well-being and rights. According to European law, health applications represent one such domain.
To minimize these risks, the AI Act prescribes that providers make available to users (or "deployers," in the regulation's terminology) Instructions for Use (IFU) describing the system's capabilities, limitations, and security. These instructions implement the transparency obligation, facilitating an informed, responsible, and proper use of high-risk AI technology.