Artificial intelligence and machine learning technologies have the potential to transform health care by deriving new and important insights from the vast amount of data generated during the delivery of health care every day. Medical device manufacturers are using these technologies to innovate their products to better assist health care providers and improve patient care. The FDA is considering a total product lifecycle-based regulatory framework for these technologies that would allow for modifications to be made from real-world learning and adaptation, while still ensuring that the safety and effectiveness of the software as a medical device are maintained. Artificial intelligence has been broadly defined as the science and engineering of making intelligent machines, especially intelligent computer programs (McCarthy, 2007). Artificial intelligence can use different techniques, including models based on statistical analysis of data, expert systems that primarily rely on if-then statements, and machine learning.
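The distinction drawn above can be made concrete with a minimal sketch: a rule-based expert system encodes clinical logic as explicit if-then statements, while a machine-learning model derives its decision rule from data. Everything below (the feature names, thresholds, and data) is invented purely for illustration and does not come from any FDA document.

```python
# Hypothetical example: two ways to flag a reading as "elevated risk".
# Thresholds and data are invented for illustration only.

def expert_system_flag(systolic_bp: float, heart_rate: float) -> bool:
    """Rule-based 'expert system': explicit, hand-written if-then rules."""
    if systolic_bp > 140:
        return True
    if heart_rate > 100:
        return True
    return False

def learn_threshold(readings: list, labels: list) -> float:
    """Tiny 'machine learning' stand-in: derive a decision threshold on one
    feature from labeled examples by brute-force search, instead of writing
    the rule by hand."""
    candidates = sorted(set(readings))
    best_t, best_correct = candidates[0], -1
    for t in candidates:
        correct = sum((r > t) == y for r, y in zip(readings, labels))
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

# Usage: the rule is hand-written in one case, learned from data in the other.
readings = [120.0, 135.0, 150.0, 160.0]
labels = [False, False, True, True]
threshold = learn_threshold(readings, labels)  # 135.0 separates the examples
```

The point of the contrast: changing the expert system means a human edits the rules, while the learned threshold changes automatically whenever the data does, which is exactly the property that motivates the lifecycle-based framework discussed here.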
The American Medical Informatics Association wants the Food and Drug Administration to improve its conceptual approach to regulating medical devices that leverage self-updating artificial intelligence algorithms. The FDA sees tremendous potential in healthcare for AI algorithms that continually evolve -- called "adaptive" or "continuously learning" algorithms -- and that don't need manual modification to incorporate learning or updates. While AMIA supports an FDA discussion paper on the topic released in early April, the group is calling on the agency to make further refinements to the Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD). "Properly regulating AI and machine learning-based SaMD will require ongoing dialogue between FDA and stakeholders," said AMIA President and CEO Douglas Fridsma, MD, in a written statement. "This draft framework is only the beginning of a vital conversation to improve both patient safety and innovation. We certainly look forward to continuing it."
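To illustrate what "continuously learning" means in this context, here is a minimal sketch of an online model that updates its own weights from each new labeled example as it arrives, with no manual re-release step. This is a generic online perceptron, not any actual SaMD algorithm; the class name and data stream are invented for illustration.

```python
# Minimal sketch of a "continuously learning" algorithm: an online
# perceptron that adapts in place from each new labeled example,
# rather than waiting for a manually issued software modification.

class OnlinePerceptron:
    def __init__(self, n_features: int, lr: float = 0.1):
        self.w = [0.0] * n_features  # learned weights, one per feature
        self.b = 0.0                 # learned bias term
        self.lr = lr                 # learning rate for each update

    def predict(self, x: list) -> int:
        score = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1 if score > 0 else 0

    def update(self, x: list, y: int) -> None:
        """Adapt from one new example -- the step an adaptive device
        would perform continuously in the field."""
        err = y - self.predict(x)
        if err != 0:
            self.w = [wi + self.lr * err * xi for wi, xi in zip(self.w, x)]
            self.b += self.lr * err

# Usage: the model keeps changing as new data streams in.
model = OnlinePerceptron(n_features=2)
stream = [([1.0, 0.0], 1), ([0.0, 1.0], 0), ([1.0, 0.5], 1), ([0.2, 1.0], 0)]
for x, y in stream:
    model.update(x, y)
```

Each `update` call potentially changes the device's behavior, which is precisely why the FDA's traditional model of reviewing a fixed, frozen algorithm fits poorly and why AMIA is pressing for a refined framework.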
Artificial intelligence and machine learning models have been leveraged in the healthcare industry for quite a while to improve patient outcomes. They have been used to interpret medical scans, diagnose disease, support drug manufacturing, and plan treatment. These AI/ML models are involved in surgical processes as well. With the amount of data being generated today, traditional AI/ML-based software models are often scrutinized for performance and accuracy. As new advances shape the future of healthcare, healthcare professionals have recognized the need to modify existing software models.
Last week, the U.S. Food and Drug Administration presented the agency's first Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan. The plan describes a multi-pronged approach to the Agency's oversight of AI/ML-based medical software. The Action Plan is a response to stakeholder input on the FDA's 2019 proposed regulatory framework for AI- and ML-based medical products. The FDA will also hold a public workshop on algorithm transparency and engage its stakeholders and partners on other key activities, such as evaluating bias in algorithms. While the Action Plan proposes a roadmap for advancing a regulatory framework, an operational structure appears to be further down the road.
While AI and machine learning have the potential to transform healthcare, the technology has inherent biases that could negatively impact patient care, senior FDA officials and Philips' head of global software standards said at the meeting. Bakul Patel, director of FDA's new Digital Health Center of Excellence, acknowledged significant challenges to AI/ML adoption, including bias and the lack of large, high-quality, well-curated datasets. "There are some constraints because of just location or the amount of information available, and the cleanliness of the data might drive inherent bias. We don't want to set up a system and then figure out after the product is out in the market that it is missing a certain type of population or demographic or other aspects that we would have accidentally not realized," Patel said. Pat Baird, Philips' head of global software standards, warned that without proper context there will be "improper use" of AI/ML-based devices that provide "incorrect conclusions" as part of clinical decision support.
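Patel's concern about datasets that silently miss a population can be made concrete with a simple pre-training audit: check each demographic subgroup's share of the dataset against a minimum threshold before a model is trained or updated. The field name (`age_band`), the 10% threshold, and the records below are hypothetical, chosen only to illustrate the idea.

```python
from collections import Counter

def audit_representation(records, group_key, min_share=0.10):
    """Flag demographic subgroups whose share of the dataset falls below
    min_share -- a crude proxy for the 'missing population' risk described
    above. The group_key field and 10% default are illustrative only.
    Returns a dict mapping each under-represented group to its share."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items() if n / total < min_share}

# Usage with invented records: 65+ patients make up only 5% of the data.
records = (
    [{"age_band": "18-40"}] * 50
    + [{"age_band": "41-65"}] * 45
    + [{"age_band": "65+"}] * 5
)
flagged = audit_representation(records, "age_band")  # {'65+': 0.05}
```

A check like this would not remove bias, but it surfaces the under-represented group before deployment rather than after the product is on the market, which is the failure mode Patel warns against.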