
First Ever Artificial Intelligence/Machine Learning Action Plan by FDA


Last week, the U.S. Food and Drug Administration released the agency's first Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan. The plan describes a multi-pronged approach to the Agency's oversight of AI/ML-based medical software, and it responds to stakeholder feedback on the FDA's 2019 regulatory framework for AI- and ML-based medical products. The FDA will also hold a public workshop on algorithm transparency and engage its stakeholders and partners on other key activities, such as evaluating bias in algorithms. While the Action Plan proposes a roadmap for advancing a regulatory framework, an operational framework appears to be further down the road.

Past and Current Regulations around Artificial Intelligence in SaMD


Editor's note: This is the second part of a two-part series. The first installment can be found here. The first installment examined the benefits of Artificial Intelligence/Machine Learning (AI/ML) and noted the considerations that regulatory bodies are studying for use with AI/ML algorithms. This second and final installment explores past and current regulations and summarizes the latest framework proposed by the U.S. Food and Drug Administration (FDA). While current guidance on AI/ML implementation in medical devices is lacking, the FDA is working to solve the problem.

Reviewing Key Principles from FDA's Artificial Intelligence White Paper JD Supra


In April 2019, the US Food and Drug Administration (FDA) issued a white paper, "Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device," announcing steps toward a new regulatory framework to promote the development of safe and effective medical devices that use advanced AI algorithms. AI, and specifically ML, are "techniques used to design and train software algorithms to learn from and act on data." FDA's proposed approach would allow algorithm modifications to be made based on real-world learning and adaptation, accommodating the iterative nature of AI products while ensuring FDA's standards for safety and effectiveness are maintained. Under the existing framework, a premarket submission (i.e., a 510(k)) would be required if the AI/ML software modification significantly affects device performance or the device's safety and effectiveness; the modification changes the device's intended use; or the modification introduces a major change to the software as a medical device (SaMD) algorithm. In the case of a PMA-approved SaMD, a PMA supplement would be required for changes that affect safety or effectiveness.
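The premarket-submission triggers described above amount to a simple "any one condition suffices" decision rule. The sketch below is only an illustration of that logic; the class, field, and function names are hypothetical and are not FDA terminology.

```python
from dataclasses import dataclass


@dataclass
class SamdModification:
    """Hypothetical summary of a proposed AI/ML SaMD modification."""
    affects_performance_or_safety: bool  # significantly affects performance, safety, or effectiveness
    changes_intended_use: bool           # modification to the device's intended use
    major_algorithm_change: bool         # major change to the SaMD algorithm


def requires_premarket_submission(mod: SamdModification) -> bool:
    """Under the existing 510(k) framework, any single trigger requires a new submission."""
    return (mod.affects_performance_or_safety
            or mod.changes_intended_use
            or mod.major_algorithm_change)
```

For example, a retraining that changes the device's intended use would require a submission even if performance is unaffected, while a change triggering none of the three conditions would not.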

AMIA calls on FDA to refine its AI regulatory framework


The American Medical Informatics Association wants the Food and Drug Administration to improve its conceptual approach to regulating medical devices that leverage self-updating artificial intelligence algorithms. The FDA sees tremendous potential in healthcare for AI algorithms that continually evolve, so-called "adaptive" or "continuously learning" algorithms that don't need manual modification to incorporate learning or updates. While AMIA supports an FDA discussion paper on the topic released in early April, the group is calling on the agency to make further refinements to the Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD). "Properly regulating AI and machine learning-based SaMD will require ongoing dialogue between FDA and stakeholders," said AMIA President and CEO Douglas Fridsma, MD, in a written statement. "This draft framework is only the beginning of a vital conversation to improve both patient safety and innovation. We certainly look forward to continuing it."

Medtechs need strategy to prevent bias in AI-machine learning-based devices: FDA


Jeff Shuren, director of the FDA's Center for Devices and Radiological Health, speaking Thursday at an FDA public workshop on the topic, called out the need for better methodologies for identifying and improving algorithms prone to mirroring "systemic biases" in the healthcare system and in the data used to train artificial intelligence and machine learning-based devices. He said the medical device industry should develop a strategy to enroll racially and ethnically diverse populations in clinical trials. "It's essential that the data used to train [these] devices represent the intended patient population with regards to age, gender, sex, race and ethnicity," Shuren said. The virtual workshop comes nine months after the agency released an action plan for establishing a regulatory approach to AI/ML-based Software as a Medical Device (SaMD). Among the five actions laid out in the plan, the FDA intends to foster a patient-centered approach that includes device transparency for users.