As part of the G7 health track's artificial intelligence (AI) governance workstream in 2021, member states committed to the creation of 2 deliverables on the subject of governance. These papers are complementary and should be read together to gain a more complete picture of the G7's stance on the governance of AI in health.

This paper is the result of a concerted effort by G7 nations to contribute to the creation of harmonised principles for the evaluation of AI/ML-enabled medical devices and the promotion of their effectiveness, performance, safety and ethicality. A total of 3 working group sessions were held to reach consensus on its content.

The rapid emergence of AI/ML-enabled medical devices poses novel challenges to current regulatory and governance systems, which are built around more traditional forms of Software as a Medical Device (SaMD). Regulators, international standards bodies[footnote 2] and health technology assessors across the world are grappling with how to provide assurance that AI/ML-enabled medical devices are safe, effective and performant, not just under test conditions but in the real world.
With the withdrawal of the U.K. from the European Union, MHRA, as part of its new Brexit freedoms, is moving to update the country's regulations for software and AI as a medical device without the burden of accommodating the regulatory approaches of EU member states. "These measures demonstrate the U.K.'s commitment, following our exit from the European Union, to drive innovation in healthcare and improve patient outcomes," states MHRA's announcement. "Regulatory measures will be updated to further protect patient safety and take account of these technological advances."

AI and SaMD technologies have the potential to improve the diagnosis and treatment of a wide variety of diseases, but FDA has yet to finalize a regulatory framework for machine learning-based software as a medical device. The agency is considering a total product lifecycle-based regulatory framework for adaptive or continuously learning algorithms.
I am regularly asked to summarize my many posts, so I thought it would be a good idea to publish on this blog, every Monday, some of the most relevant articles I have already shared on my social networks. Today I will share some of the most relevant articles about artificial intelligence and the forms it takes in everyday life, along with my comments on each. AI is everywhere, but how does it make decisions, affect society, and remain free from bias? Technology is advancing at an unprecedented pace.
Health care organizations are using artificial intelligence (AI), which the U.S. Food and Drug Administration defines as "the science and engineering of making intelligent machines", for a growing range of clinical, administrative, and research purposes. This AI software can, for example, help health care providers diagnose diseases, monitor patients' health, or assist with rote functions such as scheduling patients.

Although AI offers unique opportunities to improve health care and patient outcomes, it also comes with potential challenges. AI-enabled products, for example, have sometimes produced inaccurate, even potentially harmful, treatment recommendations.[1] These errors can be caused by unanticipated sources of bias in the information used to build or train the AI, inappropriate weight given to certain data points analyzed by the tool, and other flaws.

The regulatory framework governing these tools is complex. FDA regulates some, but not all, AI-enabled products used in health care, and the agency plays an important role in ensuring the safety and effectiveness of those products under its jurisdiction. The agency is currently considering how to adapt its review process for AI-enabled medical devices that can evolve rapidly in response to new data, sometimes in ways that are difficult to foresee.[2]

This brief describes current and potential uses of AI in health care settings and the challenges these technologies pose, outlines how and under what circumstances they are regulated by FDA, and highlights key questions that will need to be addressed to ensure that the benefits of these devices outweigh their risks.
Speaking Thursday at an FDA public workshop on the topic, Jeff Shuren, director of the FDA's Center for Devices and Radiological Health, called out the need for better methodologies to identify and improve algorithms prone to mirroring "systemic biases" in the healthcare system and in the data used to train artificial intelligence and machine learning-based devices. He also urged the medical device industry to develop a strategy for enrolling racially and ethnically diverse populations in clinical trials. "It's essential that the data used to train [these] devices represent the intended patient population with regards to age, gender, sex, race and ethnicity," Shuren said.

The virtual workshop comes nine months after the agency released an action plan for establishing a regulatory approach to AI/ML-based Software as a Medical Device (SaMD). Among the five actions laid out in the plan, FDA intends to foster a patient-centered approach that includes device transparency for users.