Artificial Intelligence Improves Coronary Artery Disease Detection, New Study Finds

#artificialintelligence

A new study, "Automated Echocardiographic Detection of Severe Coronary Artery Disease Using Artificial Intelligence," found that Ultromics' EchoGo platform, powered by artificial intelligence (AI), significantly improved the accuracy of and confidence in coronary artery disease (CAD) detection from a stress echocardiogram. The study, published in JACC: Cardiovascular Imaging, revealed that doctors are more confident in CAD diagnoses with the support of EchoGo. Further, the study revealed that EchoGo improved the sensitivity of disease detection by over 10%, bringing the effectiveness of stress echocardiography, as a modality, on par with much more expensive and invasive diagnostic procedures. Ultromics has developed an AI algorithm that automatically analyzes stress echocardiograms and tells the clinician whether the patient is at risk of coronary artery disease. The study assessed Ultromics' deep learning algorithms, trained on hundreds of thousands of data points to predict CAD risk, and found that they distinguished CAD risk in patients with 10% greater sensitivity than manual analysis.


Ensemble machine learning approach for screening of coronary heart disease based on echocardiography and risk factors

arXiv.org Machine Learning

Background: Extensive clinical evidence suggests that a preventive screening of coronary heart disease (CHD) at an earlier stage can greatly reduce the mortality rate. We use 64 two-dimensional speckle tracking echocardiography (2D-STE) features and seven clinical features to predict whether a patient has CHD. Methods: We develop a machine learning approach that integrates a number of popular classification methods by model stacking, and generalize the traditional stacking method to a two-step stacking method to improve the diagnostic performance. Results: By borrowing strengths from multiple classification models through the proposed method, we improve the CHD classification accuracy from around 70% to 87.7% on the testing set. The sensitivity of the proposed method is 0.903 and the specificity is 0.843, with an AUC of 0.904, which is significantly higher than those of the individual classification models. Conclusions: Our work lays a foundation for the deployment of speckle tracking echocardiography-based screening tools for coronary heart disease.
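
As a rough illustration of the stacking idea (not the paper's exact two-step procedure), the sketch below stacks a handful of common classifiers with scikit-learn and then stacks the resulting ensemble once more with a second meta-learner; the data, feature layout, and model choices are placeholders rather than the paper's configuration.

```python
# Minimal sketch of model stacking for CHD screening on tabular features
# (64 2D-STE features + 7 clinical features). The data below are random
# placeholders, and the "two-step" arrangement only loosely mirrors the paper.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import (RandomForestClassifier, GradientBoostingClassifier,
                              StackingClassifier)
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 71))        # placeholder 2D-STE + clinical features
y = rng.integers(0, 2, size=500)      # placeholder CHD labels (0/1)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

base_learners = [
    ("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
    ("gb", GradientBoostingClassifier(random_state=0)),
    ("svm", make_pipeline(StandardScaler(), SVC(probability=True, random_state=0))),
]

# First stacking step: combine base learners via a logistic-regression meta-learner.
stack_1 = StackingClassifier(estimators=base_learners,
                             final_estimator=LogisticRegression(max_iter=1000),
                             stack_method="predict_proba", cv=5)

# Second step (loose analogue of two-step stacking): stack the first-level
# ensemble together with a fresh learner under another meta-learner.
stack_2 = StackingClassifier(estimators=[("stack_1", stack_1),
                                         ("rf2", RandomForestClassifier(random_state=0))],
                             final_estimator=LogisticRegression(max_iter=1000),
                             cv=5)

stack_2.fit(X_train, y_train)
proba = stack_2.predict_proba(X_test)[:, 1]
print("Test AUC:", roc_auc_score(y_test, proba))
```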


The Evolution of the Role for Artificial Intelligence in Nuclear Cardiology - American College of Cardiology

#artificialintelligence

The field of nuclear cardiology has evolved over the last several decades, advancing from basic first-pass radionuclide ventriculography to gated myocardial perfusion imaging with single-photon emission computed tomography (SPECT) using solid-state cadmium zinc telluride cameras. Computed tomography (CT) has been adopted for attenuation correction, with the added benefit that the transmission CT image reveals the presence or absence of coronary calcification, providing further prognostic information. Gated myocardial perfusion acquisition has also been developed for positron emission tomography (PET), again with CT transmission imaging for attenuation correction. The utilization of PET cardiac imaging has enabled assessment of myocardial blood flow, thus improving the sensitivity and specificity of myocardial PET imaging.


Early Myocardial Infarction Detection over Multi-view Echocardiography

arXiv.org Artificial Intelligence

Myocardial infarction (MI), which occurs when a coronary artery feeding the myocardium becomes blocked, is the leading cause of mortality in the world. An early diagnosis of MI and its localization can mitigate the extent of myocardial damage by facilitating early therapeutic interventions. Following the blockage of a coronary artery, the regional wall motion abnormality (RWMA) of the ischemic myocardial segments is the earliest change to set in. Echocardiography is the fundamental tool to assess any RWMA. Assessing the motion of the left ventricle (LV) wall from only a single echocardiography view may lead to missing the diagnosis of MI, as the RWMA may not be visible in that specific view. Therefore, in this study, we propose to fuse apical 4-chamber (A4C) and apical 2-chamber (A2C) views, in which a total of 11 myocardial segments can be analyzed for MI detection. The proposed method first estimates the motion of the LV wall by Active Polynomials (APs), which extract and track the endocardial boundary to compute myocardial segment displacements. The features are extracted from the A4C and A2C view displacements, which are fused and fed into the classifiers to detect MI. The main contributions of this study are 1) creation of a new benchmark dataset, including both A4C and A2C views in a total of 260 echocardiography recordings, which is publicly shared with the research community, 2) improving on the prior threshold-based AP approach with a machine learning based approach, and 3) a pioneering MI detection approach via multi-view echocardiography, fusing the information of the A4C and A2C views. Experimental results show that the proposed method achieves 90.91% sensitivity and 86.36% precision for MI detection over multi-view echocardiography.
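
To make the fusion step concrete, here is a minimal sketch, under assumed inputs, of concatenating per-segment displacement features from the A4C and A2C views and feeding them to a classifier; the Active Polynomials tracking stage is not reproduced, and the segment split and SVM choice are illustrative rather than the paper's exact pipeline.

```python
# Minimal sketch of feature-level fusion of A4C and A2C views for MI detection.
# Per-segment displacement features are random placeholders; in the paper they
# come from Active Polynomials tracking of the endocardial boundary.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_recordings = 260

# Assumed split of the 11 analyzed segments across the two views (placeholder).
feat_a4c = rng.normal(size=(n_recordings, 6))    # displacement features from A4C
feat_a2c = rng.normal(size=(n_recordings, 5))    # displacement features from A2C
labels = rng.integers(0, 2, size=n_recordings)   # 1 = MI, 0 = non-MI (placeholder)

# Fuse the two views by concatenating their feature vectors.
features = np.concatenate([feat_a4c, feat_a2c], axis=1)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", class_weight="balanced"))
scores = cross_val_score(clf, features, labels, cv=5, scoring="recall")  # recall = sensitivity
print("Cross-validated sensitivity:", scores.mean())
```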


A deep neural network predicts survival after heart imaging better than cardiologists

arXiv.org Artificial Intelligence

Predicting future clinical events, such as death, is an important task in medicine that helps physicians guide appropriate action. Neural networks have particular promise to assist with medical prediction tasks related to clinical imaging by learning patterns from large datasets. However, neural networks have not yet been applied to medical videos on a large scale, such as ultrasound of the heart (echocardiography). Here we show that a large dataset of 723,754 clinically-acquired echocardiographic videos (45 million images) linked to longitudinal follow-up data in 27,028 patients can be used to train a deep neural network to predict 1-year survival with good accuracy. We also demonstrate that prediction accuracy can be further improved by adding highly predictive clinical variables from the electronic health record. Finally, in a blinded, independent test set, the trained neural network was more accurate in discriminating 1-year survival outcomes than two expert cardiologists. These results therefore highlight the potential of neural networks to add new predictive power to clinical image interpretations.
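
The sketch below illustrates, in simplified form, the reported gain from adding electronic health record variables to imaging-derived features: a video embedding (a random placeholder standing in for features the network would learn from echocardiographic videos) is compared with and without concatenated clinical variables using a simple logistic model. None of this reproduces the study's actual architecture or data.

```python
# Minimal sketch: image-only vs. image + EHR prediction of 1-year survival.
# All arrays are random placeholders; the video embedding stands in for
# features a trained deep network would extract from echocardiographic videos.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n_patients = 1000

video_embedding = rng.normal(size=(n_patients, 128))   # placeholder learned video features
ehr_variables = rng.normal(size=(n_patients, 10))      # e.g., age, heart rate, labs (placeholder)
survived_1yr = rng.integers(0, 2, size=n_patients)     # placeholder 1-year survival labels

X_img = video_embedding
X_both = np.concatenate([video_embedding, ehr_variables], axis=1)

for name, X in [("video only", X_img), ("video + EHR", X_both)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, survived_1yr, test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```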