A novel device designed to help stroke patients recover wrist and hand function has been approved by the US Food and Drug Administration (FDA). Called IpsiHand, the system is the first brain-computer interface (BCI) device ever to receive FDA market approval. The IpsiHand device consists of two separate parts – a wireless exoskeleton that is positioned over the wrist, and a small headpiece that records brain activity using non-invasive electroencephalography (EEG) electrodes. The system is based on a discovery made by Eric Leuthardt and colleagues at the Washington University School of Medicine over a decade ago. It is well known that each side of the brain controls movement on the opposite side of the body, so if a stroke damages motor function on the right side of the brain, movement on a person's left side will be affected.
Imbio, a leading supplier of artificial intelligence (AI) solutions for medical imaging evaluation, has gained FDA 510(k) clearance for its RV/LV Analysis™ algorithm. The RV/LV Analysis algorithm offers a quick and reliable way to check for right ventricular dilation. The tool efficiently and precisely evaluates the heart's ventricles to calculate the ratio of the right ventricle's maximum diameter to that of the left. The RV/LV Analysis results are readily accessible to clinicians without any extra work: a detailed report of quantitative findings is attached directly to the patient imaging study within minutes. David Hannes, Imbio Chief Executive Officer, stated that the automated RV/LV Analysis has the power to deliver objective information and inform risk stratification in many acute cases. Imbio is proud to offer this AI-driven algorithm to physicians and partners to support acute cases and facilitate critical treatment decisions for patients.
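The core measurement described above – the ratio of the right ventricle's maximum diameter to the left's – can be sketched in a few lines. This is a minimal illustration only, not Imbio's actual algorithm; the function names, sample diameters, and the dilation threshold of 1.0 are assumptions (a ratio at or above 1.0 is a commonly cited marker of right ventricular dilation in acute risk stratification).

```python
# Hypothetical sketch of an RV/LV ratio check. Names, sample values, and the
# threshold are illustrative assumptions, not Imbio's proprietary method.

def rv_lv_ratio(rv_max_diameter_mm: float, lv_max_diameter_mm: float) -> float:
    """Ratio of the right to the left ventricle's maximum diameter."""
    if lv_max_diameter_mm <= 0:
        raise ValueError("LV diameter must be positive")
    return rv_max_diameter_mm / lv_max_diameter_mm

def flag_rv_dilation(ratio: float, threshold: float = 1.0) -> bool:
    # An RV/LV ratio >= 1.0 is a commonly cited sign of right ventricular
    # dilation; the exact threshold used clinically is an assumption here.
    return ratio >= threshold

# Example measurements (assumed, in millimetres) from a CT study:
ratio = rv_lv_ratio(45.0, 40.0)
print(f"RV/LV ratio: {ratio:.2f}, dilated: {flag_rv_dilation(ratio)}")
```

In practice the diameters themselves would come from automated segmentation of the imaging study; the sketch only covers the final ratio-and-threshold step the article describes.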
Last week, the U.S. Food and Drug Administration presented the agency's first Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan. The plan describes a multi-pronged approach to the Agency's oversight of AI/ML-based medical software. The Action Plan is a response to stakeholder feedback on the FDA's 2019 regulatory framework for AI- and ML-based medical products. The FDA will additionally hold a public workshop on algorithm transparency and engage its stakeholders and partners on other key activities, such as evaluating bias in algorithms. While the Action Plan proposes a roadmap for advancing a regulatory framework, an operational framework appears to be further down the road.
I frequently emphasize the importance of data in the U.S. Food and Drug Administration's work as a science-based regulatory agency, and the need to "unleash the power of data" through sophisticated mechanisms for collection, review and analysis so that it may become preventive, action-oriented information. As one example of this commitment, I would like to tell you about cross-cutting work the agency is undertaking to leverage our use of artificial intelligence (AI) as part of the FDA's New Era of Smarter Food Safety initiative. This work promises to equip the FDA with important new ways to apply available data sources to strengthen our public health mission. The ultimate goal is to see if AI can improve our ability to quickly and efficiently identify products that may pose a threat to public health. One area in which the FDA is assessing the use of AI is in the screening of imported foods.
The lack of proper data training for AI algorithms used for medical devices can end up being harmful to patients, experts told the FDA. The federal agency held a nearly seven-hour patient engagement meeting on the use of artificial intelligence in healthcare Oct. 22, in which experts addressed the public's questions about machine learning in medical devices. Experts and executives in the fields of medicine, regulations, technology and public health discussed the composition of the datasets that train AI-based medical devices. A lack of transparency surrounding the datasets that train algorithms can lead to public mistrust in AI-powered medical tools, as these devices may not have been trained using patient data that accurately represents the individuals they will be treating. During the meeting, Center for Devices and Radiological Health Director Jeffrey Shuren, MD, noted that 562 AI-powered medical devices have received FDA marketing authorization and pointed out that all patients should be considered when these devices are being developed and regulated.
At a virtual meeting of the U.S. Food and Drug Administration's Center for Devices and Radiological Health and Patient Engagement Advisory Committee on Thursday, regulators offered updates and new discussion around medical devices and decision support powered by artificial intelligence. One of the topics on the agenda was how to strike a balance between safety and innovation as algorithms get smarter and better trained by the day. In his discussion of AI and machine learning validation, Bakul Patel, director of the FDA's recently launched Digital Health Center of Excellence, said he sees huge breakthroughs on the horizon. "This new technology is going to help us get to a different place and a better place," said Patel. "You're seeing automated image diagnostics."
While AI and machine learning have the potential to transform healthcare, the technology has inherent biases that could negatively impact patient care, senior FDA officials and Philips' head of global software standards said at the meeting. Bakul Patel, director of FDA's new Digital Health Center of Excellence, acknowledged significant challenges to AI/ML adoption, including bias and the lack of large, high-quality and well-curated datasets. "There are some constraints because of just location or the amount of information available, and the cleanliness of the data might drive inherent bias. We don't want to set up a system and then figure out after the product is out in the market that it is missing a certain type of population or demographic or other aspects that we would have accidentally not realized," Patel said. Pat Baird, Philips' head of global software standards, warned that without proper context there will be "improper use" of AI/ML-based devices, with "incorrect conclusions" provided as part of clinical decision support.
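The concern Patel raises – discovering only after a product reaches the market that its training data missed a demographic – can be partially addressed with a simple pre-deployment audit that compares the training set's demographic mix against the target patient population. The sketch below is one illustrative way to do this; the function, group labels, and the 50%-of-expected-share cutoff are assumptions, not an FDA-prescribed method.

```python
# Hypothetical dataset audit: flag demographic groups whose share of the
# training data falls well below their share of the target population.
# Group names and the min_fraction cutoff are illustrative assumptions.
from collections import Counter

def underrepresented_groups(training_labels, target_shares, min_fraction=0.5):
    """Return groups whose observed share of the training data is below
    min_fraction times their expected share of the target population."""
    counts = Counter(training_labels)
    total = sum(counts.values())
    flagged = []
    for group, expected_share in target_shares.items():
        observed_share = counts.get(group, 0) / total if total else 0.0
        if observed_share < min_fraction * expected_share:
            flagged.append(group)
    return flagged

# Example: group "A" dominates a 100-record training set, while the target
# population (assumed shares) is far more mixed.
train = ["A"] * 90 + ["B"] * 8 + ["C"] * 2
target = {"A": 0.6, "B": 0.3, "C": 0.1}
print(underrepresented_groups(train, target))  # → ['B', 'C']
```

Running such a check before deployment is one way to surface the "missing population" problem while it is still cheap to fix, for example by collecting more data for the flagged groups.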
The healthcare industry is increasingly focusing on niche patient populations. Around half of FDA approvals in the past two years were for rare or orphan diseases – conditions affecting fewer than 200,000 patients in total in the US, or fewer than 1 in 2,000 people in Europe. By 2024, orphan drug sales are expected to capture one-fifth of worldwide prescription sales. However, finding these hard-to-reach patients is difficult, and keeping them engaged over time is even more so. Could machine learning platforms that deliver personalized experiences for patients and caregivers be part of the answer?