Keya Medical has launched DeepVessel FFR, a software device that uses deep learning to facilitate fractional flow reserve (FFR) assessment based on coronary computed tomography angiography (CCTA). Cleared by the U.S. Food and Drug Administration (FDA), DeepVessel FFR provides a three-dimensional coronary artery tree model and an estimate of the FFR-CT value after a semi-automated review of CCTA images, according to Keya Medical. The company said DeepVessel FFR has demonstrated higher accuracy than other non-invasive tests and suggested the software could help reduce invasive coronary angiography and stent implantation procedures in the diagnostic workup and subsequent treatment of coronary artery disease. Joseph Schoepf, M.D., FACR, FAHA, FNASCI, the principal investigator of a recent multicenter trial evaluating DeepVessel FFR, says the introduction of the modality in the United States dovetails with recent guidelines for the diagnosis of chest pain. "I am excited to see the implementation of DeepVessel FFR. It comes together with the 2021 ACC/AHA Chest Pain Guidelines' recognition of the elevated diagnostic role of CCTA and FFR CT for the non-invasive evaluation of patients with stable or acute chest pain," noted Dr. Schoepf, a professor of Radiology, Medicine, and Pediatrics at the Medical University of South Carolina.
Data presented at EHRA 2022 show that Cardiologs' deep neural network model is better than the Apple Watch ECG 2.0 algorithm at detecting and classifying irregular heart rhythms. "Wearable devices, such as the Apple Watch, are capable of recording a single-lead ECG to determine cardiac rhythm. However, a large proportion of the smartwatch readings come back as inconclusive. This problem was solved using Cardiologs' AI algorithm. The data showed Cardiologs' AI performed equally as well but, remarkably, the results were almost never inconclusive," said Dr. Laurent Fiorina. The study included 101 patients at a tertiary care hospital who were assessed for atrial arrhythmia (AA).
In 2021, the application of AI enabled advances in many areas of healthcare. We made significant progress in AI for drug discovery, medical imaging, diagnostics, pathology, and clinical trials. Important peer-reviewed papers were published and dozens of partnerships were formed. Big Pharma companies and major tech companies became very active in the space. Record amounts of funding were raised, and a few companies even started human clinical trials. Microsoft and NVIDIA launched two of the world's most powerful supercomputers, and Microsoft announced the Azure OpenAI Service. In 2022 we expect these technologies to converge across the healthcare spectrum. This article summarizes the milestones achieved in 2021. It is the first in a series of progress reports I'm writing on the sector, which will be supplemented by industry performance data and metrics compiled in partnership with the Alliance for Artificial Intelligence in Healthcare (AAIH) and other top-tier resources.
Bing Zhai, Yu Guan, Michael Catt, and Thomas Ploetz
Sleep is a fundamental physiological process that is essential for sustaining a healthy body and mind. The gold standard for clinical sleep monitoring is polysomnography (PSG), based on which sleep can be categorized into five stages: wake, rapid eye movement (REM) sleep, and the non-REM stages N1, N2, and N3. However, PSG is expensive, burdensome, and not suitable for daily use. For long-term sleep monitoring, ubiquitous sensing may be a solution. Most recently, cardiac and movement sensing has become popular for classifying three-stage sleep, since both modalities can be easily acquired from research-grade or consumer-grade devices (e.g., the Apple Watch). However, how best to fuse the data for the greatest accuracy remains an open question. In this work, we comprehensively studied deep learning (DL)-based advanced fusion techniques, comprising three fusion strategies alongside three fusion methods, for three-stage sleep classification on two publicly available datasets. The experimental results provide important evidence that three-stage sleep can be reliably classified by fusing the cardiac and movement sensing modalities, which may become a practical tool for large-scale sleep stage assessment studies or long-term self-tracking of sleep. To accelerate the progression of sleep research in the ubiquitous/wearable computing community, we have made this project open source; the code can be found at: https://github.com/bzhai/Ubi-SleepNet.
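As a rough illustration of the fusion idea described in this abstract (a minimal sketch, not the Ubi-SleepNet architecture itself; the feature dimensions, layer sizes, and names are assumptions), a feature-level fusion network that concatenates cardiac and movement embeddings for three-stage sleep classification could be written in PyTorch along these lines:

import torch
import torch.nn as nn

class TwoModalityFusionNet(nn.Module):
    # Toy feature-level fusion: encode each modality separately, concatenate, classify.
    def __init__(self, cardiac_dim=8, movement_dim=3, hidden=64, n_classes=3):
        super().__init__()
        self.cardiac_enc = nn.Sequential(nn.Linear(cardiac_dim, hidden), nn.ReLU())
        self.movement_enc = nn.Sequential(nn.Linear(movement_dim, hidden), nn.ReLU())
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes),  # logits over wake / NREM / REM
        )

    def forward(self, cardiac, movement):
        fused = torch.cat([self.cardiac_enc(cardiac), self.movement_enc(movement)], dim=-1)
        return self.classifier(fused)

# Example: a batch of 32 sleep epochs with hypothetical hand-crafted features per modality.
model = TwoModalityFusionNet()
logits = model(torch.randn(32, 8), torch.randn(32, 3))
predicted_stage = logits.argmax(dim=-1)  # one of the three sleep stages per epoch

Concatenation after separate encoders is only one possible fusion point; other strategies fuse earlier, at the raw-signal level, or later, at the decision level.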
The electrocardiogram (ECG) is the most widely used diagnostic tool for monitoring the condition of the human heart. Using deep neural networks (DNNs), the interpretation of ECG signals can be fully automated, identifying potential abnormalities in a patient's heart in a fraction of a second. Studies have shown that, given a sufficiently large amount of training data, DNN accuracy for ECG classification can reach the level of a human expert cardiologist. However, despite their excellent classification accuracy, DNNs are highly vulnerable to adversarial noise: subtle changes to a DNN's input that may lead to a wrong class-label prediction. Improving the robustness of DNNs against adversarial noise is challenging and essential, as such noise is a threat to life-critical applications. In this work, we proposed a regularization method to improve DNN robustness from the perspective of the noise-to-signal ratio (NSR) for ECG signal classification. We evaluated our method on the PhysioNet MIT-BIH dataset and the CPSC2018 ECG dataset, and the results show that it can substantially enhance DNN robustness against adversarial noise generated by adversarial attacks, with a minimal change in accuracy on clean data.
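To make the general idea concrete (an illustrative sketch only, not necessarily the paper's exact NSR formulation; model, epsilon, and lam are assumed placeholders), a training loss that penalizes the ratio of the perturbation-induced change in the logits (the "noise") to the magnitude of the clean logits (the "signal") could look like this:

import torch
import torch.nn.functional as F

def nsr_regularized_loss(model, x, y, epsilon=0.01, lam=1.0):
    # Craft a small FGSM-style perturbation of the ECG input; only the input
    # gradient is needed here, so the adversarial example is detached afterwards.
    x_req = x.clone().detach().requires_grad_(True)
    grad = torch.autograd.grad(F.cross_entropy(model(x_req), y), x_req)[0]
    x_adv = (x + epsilon * grad.sign()).detach()

    # Standard cross-entropy on clean data plus an output-level noise-to-signal penalty.
    clean_logits = model(x)
    adv_logits = model(x_adv)
    ce = F.cross_entropy(clean_logits, y)
    nsr = (adv_logits - clean_logits).norm(dim=-1) / (clean_logits.norm(dim=-1) + 1e-8)
    return ce + lam * nsr.mean()

During training this term would simply replace the plain cross-entropy loss, e.g. loss = nsr_regularized_loss(ecg_classifier, batch_x, batch_y) followed by loss.backward().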
Yunyou Huang, Zhifei Zhang, Nana Wang, Nengquan Li, Mengjia Du, Tianshu Hao, and Jianfeng Zhan
These authors contributed equally to this work. Artificial intelligence (AI) researchers claim that they have made great 'achievements' in clinical realms. However, clinicians point out that these so-called 'achievements' cannot be implemented in natural clinical settings. The root cause of this huge gap is that many essential features of natural clinical tasks are overlooked by AI system developers who lack a medical background. In this paper, we propose that a clinical benchmark suite is a novel and promising way to capture the essential features of real-world clinical tasks, and is therefore qualified to guide the development of AI systems and to promote the implementation of AI in real-world clinical practice. AI researchers claim to have obtained many significant 'achievements' in various clinical tasks. However, in practice, most AI products fail to obtain approval from the Food and Drug Administration (FDA), and AI devices are not qualified to handle high-risk tasks such as clinical diagnosis.
Artificial intelligence (AI) was by far the hottest trend discussed in sessions and across the expo floor at the world's largest radiology conference, the 2018 Radiological Society of North America (RSNA) annual meeting. At the meeting in late November, there was an explosion of AI and deep learning algorithms across the expo floor. How machine learning will impact medical imaging was the key takeaway from the opening session, where examples of how AI will alter medical imaging in the near future were highlighted. Here is an overview of the types of AI software being developed, along with a few examples from RSNA specific to cardiovascular imaging. Artificial intelligence has been a growing topic at RSNA in past years, but this year several companies showed products that had recently gained U.S. Food and Drug Administration (FDA) market clearance.
Big pharma is taking an AI-first approach. Apple is revolutionizing clinical studies. We look at the top artificial intelligence trends reshaping healthcare. Healthcare is emerging as a prominent area for AI research and applications, and nearly every area across the industry will be impacted by the technology's rise. Image recognition, for example, is revolutionizing diagnostics: Google DeepMind's neural networks recently matched the accuracy of medical experts in diagnosing 50 sight-threatening eye diseases. Even pharma companies are experimenting with deep learning to design new drugs; Merck, for example, has partnered with the startup Atomwise, and GlaxoSmithKline is partnering with Insilico Medicine.
Artificial intelligence originally aspired to replace doctors. Researchers imagined robots that could ask you questions, run the answers through an algorithm that would learn with experience, and tell you whether you had the flu or a cold. However, those promises largely went unfulfilled, as AI algorithms were too rudimentary to perform those functions. Particularly tricky was the variability between people, which caused basic machine learning algorithms to miss the patterns. Eventually, though, a subset of AI called deep learning became sensitive enough to recognize speech from voice data.