Collaborating Authors


Deliverable 1: principles for the evaluation of artificial intelligence or machine learning-enabled medical devices to assure safety, effectiveness and ethicality


As part of the G7 health track's artificial intelligence (AI) governance workstream 2021, member states committed to the creation of 2 deliverables on the subject of governance. These papers are complementary and should therefore be read in combination to gain a more complete picture of the G7's stance on the governance of AI in health.

This paper is the result of a concerted effort by G7 nations to contribute to the creation of harmonised principles for the evaluation of AI and machine learning (AI/ML)-enabled medical devices, and the promotion of their effectiveness, performance, safety and ethicality. A total of 3 working group sessions were held to reach consensus on the content of this paper.

The rapid emergence of AI/ML-enabled medical devices presents novel challenges to current regulatory and governance systems, which were designed around more traditional forms of Software as a Medical Device (SaMD). Regulators, international standards bodies[footnote 2] and health technology assessors across the world are grappling with how to provide assurance that AI/ML-enabled medical devices are safe, effective and performant, not just under test conditions but in the real world.