Transpara breast AI by ScreenPoint Medical reaches major milestone in the lead up to …

#artificialintelligence

It is the first and remains the only deep learning system to be FDA cleared for use on both 2D and 3D mammograms, and is now the first of its kind to …


AliveCor gets FDA nod for suite of cardiac focused AI algorithms

#artificialintelligence

Cardio-focused digital health company AliveCor landed FDA clearance for its new suite of interpretive ECG algorithms, dubbed Kardia AI V2. The news comes just days after the company announced a $65 million Series E funding round. The newly cleared algorithms can detect sinus rhythm with premature ventricular contractions, sinus rhythm with supraventricular ectopy and sinus rhythm with wide QRS. The algorithms work on AliveCor's KardiaMobile and KardiaMobile 6L devices, which, even before this latest FDA clearance, could take 30-second ECGs and connect to a corresponding app. According to the company's release, the update will also reduce the number of unclassified readings and improves the sensitivity and specificity of the company's normal and atrial fibrillation algorithms. Users will also have new visualization tools that let them see heartbeat average, PVC identification and a tachogram.
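For readers curious how beat-level rhythm analysis of this kind can work in principle, here is a minimal, hypothetical sketch that flags premature beats from RR intervals alone; the 20% prematurity threshold and the rolling-average window are assumptions for illustration, and this is not AliveCor's cleared algorithm, which is far more sophisticated.

```python
# Illustrative toy premature-beat detector based on RR intervals (ms).
# NOT AliveCor's algorithm; threshold and window are assumed values.

def flag_premature_beats(rr_intervals_ms, window=8, prematurity=0.20):
    """Flag beats whose RR interval is markedly shorter than the local
    average, a crude proxy for ectopy such as PVCs."""
    flags = []
    for i, rr in enumerate(rr_intervals_ms):
        recent = rr_intervals_ms[max(0, i - window):i]
        if not recent:
            flags.append(False)  # no history yet for the first beat
            continue
        local_avg = sum(recent) / len(recent)
        flags.append(rr < (1.0 - prematurity) * local_avg)
    return flags

# Example: a steady ~800 ms sinus rhythm with one premature beat (520 ms)
rr = [810, 800, 795, 805, 800, 520, 1080, 800, 805]
print(flag_premature_beats(rr))  # only the 520 ms interval is flagged
```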


Humanizing AI: How to Close the Trust Gap in Healthcare - InformationWeek

#artificialintelligence

Physician turnover in the United States, driven by burnout and related factors, was conservatively estimated to cost the US healthcare system some $4.6 billion annually, according to a 2019 Annals of Internal Medicine study. The results reflect a familiar dynamic: too many doctors are buried in paperwork that takes time away from patients. Just five months after this study was published, Harvard Business Review published "How AI in the Exam Room Could Reduce Physician Burnout," examining multiple artificial intelligence initiatives that may streamline providers' administrative tasks and thus reduce burnout. Still, barriers to trust in AI solutions remain, highlighted by 2020 KPMG International survey findings that only 35% of leaders have a high degree of trust in AI-powered data analytics within their own organizations. This lack of confidence, even in their own AI-driven solutions, underscores the significant trust gap between decision-makers and technology in the current digital era.


How artificial intelligence is changing the GP-patient relationship - Pulse Today

#artificialintelligence

'Alexa, what are the early signs of a stroke?' GPs may no longer be the first port of call for patients looking to understand their ailments. 'Dr Google' is already well established in patients' minds, and now they have a host of apps using artificial intelligence (AI), allowing them to input symptoms and receive a suggested diagnosis or advice without the need for human interaction. And policymakers are on board. Matt Hancock is the most tech-friendly health secretary ever, NHS England chief executive Simon Stevens wants England to lead the world in AI, and the prime minister last month announced £250m for a national AI lab to help cut waiting times and detect diseases earlier. Amazon even agreed a partnership with NHS England in July to allow people to access health information via its voice-activated assistant Alexa.


FDA holds public meeting on AI, focuses on data training bias

#artificialintelligence

A lack of proper training data for the AI algorithms used in medical devices can end up harming patients, experts told the FDA. The federal agency held a nearly seven-hour patient engagement meeting on the use of artificial intelligence in healthcare Oct. 22, in which experts addressed the public's questions about machine learning in medical devices. Experts and executives in the fields of medicine, regulation, technology and public health discussed the composition of the datasets that train AI-based medical devices. A lack of transparency around these training datasets can breed public mistrust in AI-powered medical tools, since the devices may not have been trained on patient data that accurately represents the individuals they will be treating. During the meeting, Center for Devices and Radiological Health Director Jeffrey Shuren, MD, noted that 562 AI-powered medical devices have received FDA emergency use authorization and stressed that all patients should be considered when these devices are developed and regulated.
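As a concrete illustration of the dataset-composition concern raised at the meeting, the sketch below compares a training set's demographic mix against a target patient population and flags under-represented groups; the groups, shares, and 0.5 ratio threshold are all hypothetical.

```python
# Minimal sketch of a dataset-representation audit. Groups, counts, and the
# min_ratio threshold are hypothetical, for illustration only.

from collections import Counter

def representation_gaps(train_labels, population_share, min_ratio=0.5):
    """Return groups whose share of training data falls below min_ratio
    times their share of the target patient population."""
    counts = Counter(train_labels)
    total = sum(counts.values())
    gaps = {}
    for group, pop_share in population_share.items():
        train_share = counts.get(group, 0) / total
        if train_share < min_ratio * pop_share:
            gaps[group] = (train_share, pop_share)
    return gaps

train = ["A"] * 800 + ["B"] * 150 + ["C"] * 50    # training-set groups
population = {"A": 0.60, "B": 0.25, "C": 0.15}    # target population mix
print(representation_gaps(train, population))
# {'C': (0.05, 0.15)} -- group C is under-represented vs. the clinic's patients
```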


What do patients think about AI in the clinic? The FDA wants to find out

#artificialintelligence

Autonomous AI systems are rapidly making their way into the health care system, presenting regulators with thorny questions about how to protect data, prevent bias, and make sure constantly evolving machines can operate safely in clinical practice. The urgency of those inquiries will be on display Thursday during a key meeting hosted by the Food and Drug Administration, which is convening patients to collect their perspectives on AI development and regulation. The gathering of the Patient Engagement Advisory Committee comes as the agency considers crossing a crucial threshold: the approval of the first adaptive AI product, in which a system's performance changes based on its use in the real world. To date, the FDA has only approved locked systems that produce the same result based on the same input.
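The locked-versus-adaptive distinction is easy to see in a toy model: a locked system returns the same output for the same input forever, while an adaptive one keeps changing with field data. The sketch below is purely illustrative; the threshold-update rule is an assumption, not any vendor's method.

```python
# Toy contrast between a "locked" and an "adaptive" classifier. Both compare
# a reading against a threshold; only the adaptive one learns from feedback.
# The update rule here is assumed for illustration.

class LockedModel:
    def __init__(self, threshold):
        self.threshold = threshold          # fixed at "clearance" time

    def predict(self, x):
        return x > self.threshold           # same input -> same output, always

class AdaptiveModel(LockedModel):
    def update(self, x, label, lr=1.0):
        # Nudge the threshold to reduce the observed error; this continual
        # post-deployment change is what regulators would have to oversee.
        error = (x > self.threshold) - label
        self.threshold += lr * error * abs(x - self.threshold)

locked, adaptive = LockedModel(0.5), AdaptiveModel(0.5)
adaptive.update(x=0.6, label=0)             # field report: 0.6 was a false positive
print(locked.predict(0.55), adaptive.predict(0.55))  # True False
```

The same input (0.55) now yields different answers from the two models, which is exactly why adaptive systems pose a new regulatory problem: the device the FDA evaluated is no longer the device in the clinic.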


Machine learning shows similar performance to traditional risk prediction models

#artificialintelligence

Some claim that machine learning technology has the potential to transform healthcare systems, but a study published by The BMJ finds that machine learning models perform similarly to traditional statistical models and share similar uncertainty in making risk predictions for individual patients. The NHS has invested £250m ($323m; €275m) to embed machine learning in healthcare, but researchers say the level of consistency (stability) within and between models should be assessed before they are used to make treatment decisions for individual patients. Risk prediction models are widely used in clinical practice. They use statistical techniques alongside information about people, such as their age and ethnicity, to identify those at high risk of developing an illness and to inform decisions about their care. Previous research has found that a traditional risk prediction model such as QRISK3 performs very well at the population level but carries considerable uncertainty in individual risk predictions.
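The stability concern the researchers describe can be demonstrated on synthetic data: refit the same kind of model on bootstrap resamples and track how far one individual's predicted risk moves between fits. The sketch below uses a random forest on simulated data, an assumed stand-in, not QRISK3 or the study's actual models.

```python
# Sketch of model instability at the individual level: refit on bootstrap
# resamples and watch one patient's predicted risk move. Data and model
# choice are synthetic/illustrative.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
patient = X[0:1]                                  # one individual to track

risks = []
for seed in range(20):                            # 20 bootstrap refits
    idx = rng.randint(0, len(X), len(X))          # resample with replacement
    model = RandomForestClassifier(n_estimators=100, random_state=seed)
    model.fit(X[idx], y[idx])
    risks.append(model.predict_proba(patient)[0, 1])

# Population-level metrics can look stable even while one patient's
# predicted risk ranges widely across equally plausible refits:
print(f"individual risk range: {min(risks):.2f}-{max(risks):.2f}")
```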


Rapid coronavirus antigen tests may give false positives, FDA warns

FOX News

"Our technology has advanced, our diagnostics have improved and our testing capability has advanced since the beginning of this pandemic," says Dr. Nicole Saphier, Fox News medical contributor. The Food and Drug Administration (FDA) warned about the possibility of false positives that can occur when using rapid antigen tests to detect coronavirus, particularly if the test is not used correctly. The regulatory agency said it has received reports of false-positive results occurring in nursing homes and other health care settings. The agency warned that reading test results either before or after the time specified in the instructions can produce false-positive or false-negative results. It also referenced the antigen EUA conditions of authorization, which specify that authorized laboratories are to follow the manufacturer's instructions for administering the test and reading the results.
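A short Bayes calculation shows why false positives become prominent when prevalence is low, as in routine screening of nursing home residents. The sensitivity, specificity, and prevalence figures below are illustrative assumptions, not values from any authorized test.

```python
# Worked example: positive predictive value (PPV) of a screening test,
# via Bayes' rule. All input figures are assumed for illustration.

def positive_predictive_value(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical screening program with 0.5% prevalence:
ppv = positive_predictive_value(sensitivity=0.85, specificity=0.97,
                                prevalence=0.005)
print(f"PPV = {ppv:.0%}")  # ~12%: most positives are false at this prevalence
```

At 0.5% prevalence, even a test with 97% specificity yields a PPV of roughly 12%, which is why positive screening results in low-prevalence settings are often confirmed with a molecular test.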


How Might Artificial Intelligence Applications Impact Risk Management?

#artificialintelligence

Artificial intelligence (AI) applications have attracted considerable ethical attention for good reasons. Although AI models might advance human welfare in unprecedented ways, progress will not occur without substantial risks. This article considers 3 such risks: system malfunctions, privacy protections, and consent to data repurposing. To meet these challenges, traditional risk managers will likely need to collaborate intensively with computer scientists, bioinformaticists, information technologists, and data privacy and security experts. This essay will speculate on the degree to which these AI risks might be embraced or dismissed by risk management.


Council Post: Where Is Artificial Intelligence Now, And Where Should Your Company Be?

#artificialintelligence

We are near the end of the hype cycle for artificial intelligence (AI). The human champion of the game of Go retired, saying AI cannot be beaten, after AlphaGo defeated him. Domain-specific chatbots are engaging with customers and providing them with the answers they need. AI is about to revolutionize our broken health-care system. Is your company ready for AI? Anyone with deep data claims to be using AI.