Keya Medical has launched DeepVessel FFR, a software device that uses deep learning to facilitate fractional flow reserve (FFR) assessment based on coronary computed tomography angiography (CCTA). Cleared by the Food and Drug Administration (FDA), DeepVessel FFR provides a three-dimensional coronary artery tree model and an estimated FFR-CT value after a semi-automated review of CCTA images, according to Keya Medical. The company said DeepVessel FFR has demonstrated higher accuracy than other non-invasive tests and suggested the software could help reduce invasive coronary angiography and stent implantation in the diagnostic workup and subsequent treatment of coronary artery disease. Joseph Schoepf, M.D., FACR, FAHA, FNASCI, the principal investigator of a recent multicenter trial evaluating DeepVessel FFR, says the introduction of the modality in the United States dovetails nicely with recent guidelines for the diagnosis of chest pain. "I am excited to see the implementation of DeepVessel FFR. It comes together with the 2021 ACC/AHA Chest Pain Guidelines' recognition of the elevated diagnostic role of CCTA and FFR CT for the non-invasive evaluation of patients with stable or acute chest pain," noted Dr. Schoepf, a professor of Radiology, Medicine, and Pediatrics at the Medical University of South Carolina.
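For context, fractional flow reserve is conventionally defined as the ratio of coronary pressure distal to a stenosis to aortic pressure under hyperemia, with values at or below about 0.80 generally treated as hemodynamically significant. The following is a minimal sketch of that arithmetic for illustration only; it is not Keya Medical's algorithm, and the function names are invented here.

```python
def ffr(distal_pressure_mmhg, aortic_pressure_mmhg):
    """Fractional flow reserve: Pd / Pa measured during hyperemia."""
    return distal_pressure_mmhg / aortic_pressure_mmhg

def is_hemodynamically_significant(ffr_value, cutoff=0.80):
    """Conventional cutoff: FFR <= 0.80 suggests a flow-limiting stenosis."""
    return ffr_value <= cutoff

# example: Pd = 60 mmHg, Pa = 100 mmHg -> FFR = 0.60, significant
value = ffr(60, 100)
```

Tools like DeepVessel FFR aim to estimate this ratio non-invasively from CCTA imaging rather than from pressure-wire measurements.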
Robotic surgery has many benefits, including faster recovery, less scarring, less pain, and shorter hospital stays. These factors allow patients to resume normal life sooner. A highly developed healthcare infrastructure is facilitating the integration of robotics into healthcare facilities, boosting the market for robotic surgery. The prevalence of chronic cardiovascular disease, fueled by rising obesity, is expanding the market opportunity. Some of the major advances expected over the next few years are high-definition cameras and battery-powered computing devices. AI and ML are likely to improve the accuracy and precision of robotic surgery, further improving the reliability of procedures and accelerating the growth of the robotic surgery market.
Ultrasound can provide detailed images of your heart, but the bulky equipment makes it impractical for continuous scanning, especially outside the hospital. It might be far more portable in the future, however. Researchers have developed a wearable ultrasound patch that provides real-time heart imagery, even while you're in motion. It also uses deep learning to automatically calculate ventricle volume and generate performance stats. You'd know your cardiac output at any given moment, for instance.
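The performance stats mentioned follow from standard cardiac physiology: stroke volume is the difference between end-diastolic and end-systolic ventricular volume, and cardiac output is stroke volume times heart rate. A minimal sketch of those formulas (illustrative only; not the researchers' software, and the function names are assumptions):

```python
def stroke_volume_ml(edv_ml, esv_ml):
    """Stroke volume: end-diastolic minus end-systolic ventricular volume (mL)."""
    return edv_ml - esv_ml

def cardiac_output_l_per_min(edv_ml, esv_ml, heart_rate_bpm):
    """Cardiac output = stroke volume x heart rate, converted from mL/min to L/min."""
    return stroke_volume_ml(edv_ml, esv_ml) * heart_rate_bpm / 1000.0

def ejection_fraction(edv_ml, esv_ml):
    """Fraction of the end-diastolic volume ejected on each beat."""
    return stroke_volume_ml(edv_ml, esv_ml) / edv_ml
```

With typical resting values (EDV 120 mL, ESV 50 mL, 70 bpm), this yields a cardiac output of about 4.9 L/min. The deep-learning component of the patch supplies the volume estimates; the downstream arithmetic is this simple.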
Acute chest pain (ACP) can be caused by acute coronary syndrome, pulmonary embolism, or aortic dissection, but only a minority of patients who present with ACP are diagnosed with those serious cardiovascular conditions. As such, physicians need to take all cases of ACP very seriously despite the fact that most patients are low risk. Researchers at Massachusetts General Hospital identified deep learning as a potential way to identify high-risk patients and thereby accelerate diagnosis while improving the use of resources. The project centered on the chest radiographs that ACP patients often undergo early in the care pathway. By applying deep learning to the images, the collaborators trained a model to identify signs in the scans that a person may have one of these cardiovascular conditions.
Brain. 2023 Jan 13:awac340. doi: 10.1093/brain/awac340. Online ahead of print.

ABSTRACT

Assessing the integrity of neural functions in coma after cardiac arrest remains an open challenge. Prognostication of coma outcome relies mainly on visual expert scoring of physiological signals, which is prone to subjectivity and leaves a considerable number of patients in a 'grey zone', with uncertain prognosis. Quantitative analysis of EEG responses to auditory stimuli can provide a window into neural functions in coma and information about patients' chances of awakening. However, responses to standardized auditory stimulation are far from being used in a clinical routine due to heterogeneous and cumbersome protocols. Here, we hypothesize that convolutional neural networks can assist in extracting interpretable patterns of EEG responses to auditory stimuli during the first day of coma that are predictive of patients' chances of awakening and survival at 3 months. We used convolutional neural networks (CNNs) to model single-trial EEG responses to auditory stimuli in the first day of coma, under standardized sedation and targeted temperature management, in a multicentre and multiprotocol patient cohort and predict outcome at 3 months. The use of CNNs resulted in a positive predictive power for predicting awakening of 0.83 ± 0.04 and 0.81 ± 0.06 and an area under the curve in predicting outcome of 0.69 ± 0.05 and 0.70 ± 0.05, for patients undergoing therapeutic hypothermia and normothermia, respectively. These results also persisted in a subset of patients that were in a clinical 'grey zone'. The network's confidence in predicting outcome was based on interpretable features: it strongly correlated to the neural synchrony and complexity of EEG responses and was modulated by independent clinical evaluations, such as the EEG reactivity, background burst-suppression or motor responses.
Our results highlight the strong potential of interpretable deep learning algorithms in combination with auditory stimulation to improve prognostication of coma outcome.

PMID:36637902 | DOI:10.1093/brain/awac340
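The headline metric in the abstract, positive predictive value, is straightforward to compute from binary predictions: of the patients the model flags as likely to awaken, what fraction actually did. A minimal illustrative sketch (not the study's code; the labels below are hypothetical):

```python
def positive_predictive_value(y_true, y_pred):
    """PPV = TP / (TP + FP): among positive predictions, the fraction that were correct."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fp)

# hypothetical outcomes: 1 = awakened at 3 months, 0 = did not
y_true = [1, 1, 0, 1, 0, 1]
y_pred = [1, 1, 1, 1, 0, 0]
ppv = positive_predictive_value(y_true, y_pred)
```

A PPV of 0.83, as reported for the hypothermia group, means that roughly five of every six patients the network predicted would awaken did awaken, which is why the authors emphasize it alongside the (lower) area under the curve.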
Researchers develop tools to help data scientists make the features used in machine-learning models more understandable for end users. Explanation methods that help users understand and trust machine-learning models often describe how much certain features used in the model contribute to its prediction. For example, if a model predicts a patient's risk of developing cardiac disease, a physician might want to know how strongly the patient's heart rate data influences that prediction.
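One common way to measure how much a feature contributes is permutation importance: shuffle one feature's values across patients and see how far the model's accuracy drops. The sketch below implements that idea from scratch on a toy "cardiac risk" model; it is a generic illustration of the technique, not the MIT researchers' tool, and the threshold model and feature layout are invented here.

```python
import random

def accuracy(model, X, y):
    """Fraction of samples the model classifies correctly."""
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, seed=0):
    """Drop in accuracy when one feature's values are shuffled across samples.

    A large drop means the model relies heavily on that feature.
    """
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    col = [x[feature] for x in X]
    rng.shuffle(col)
    X_perm = [list(x) for x in X]
    for row, value in zip(X_perm, col):
        row[feature] = value
    return base - accuracy(model, X_perm, y)

# toy model: flags high risk when resting heart rate (feature 0) exceeds 90 bpm;
# feature 1 is pure noise and should receive no importance
model = lambda x: int(x[0] > 90)
rng = random.Random(1)
X = [[rng.uniform(50, 130), rng.uniform(0, 1)] for _ in range(200)]
y = [int(x[0] > 90) for x in X]

hr_importance = permutation_importance(model, X, y, feature=0)
noise_importance = permutation_importance(model, X, y, feature=1)
```

Here shuffling heart rate destroys the model's accuracy while shuffling the noise feature changes nothing, which is exactly the kind of contrast a physician could read off an explanation interface.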
Cars can be very dangerous for their occupants: over 1 million people globally die in car accidents every year, and another 20 to 50 million report nonfatal accident-related injuries annually. This story is from the WIRED World in 2023, the magazine's annual trends briefing. But driving doesn't have to be this way.
A question has increasingly plagued me since I began studying our relationship with technology about two decades ago: Will we ever pay attention again? The concern arose from measuring the shrinking attention spans of hundreds of knowledge workers in a variety of work roles. Whether we're talking about a Gen Zer or a baby boomer, a CEO or an administrative assistant, attention spans on our computers and phones are short and declining. To study people's attention on their devices, with my research team at the University of California, Irvine, and with colleagues at Microsoft Research, I observed people in their natural environments and created living laboratories. We used sophisticated computer logging techniques to measure attention spans, and heart rate monitors and wearable devices to measure stress.
Volta Medical, a France-based health technology company developing AI solutions to assist electrophysiologists and surgeons, has secured €36 million in Series B funding, bringing the company's total funding to €70 million. The investment round was led by the US-based Vensana Capital, with participation from Lightstone Ventures (which backed Dunzo and Nimbus Therapeutics) and existing investor Gilde Healthcare. The funding will help accelerate new product development, support additional clinical trials, prepare for full-scale US commercialisation, and pursue further regulatory approvals. The company's lead product, VOLTA VX1, is the first commercially available AI decision-support software to help guide physicians in identifying and annotating, in real time, unique abnormalities on 3D anatomical and electrical maps of the heart.