Deep Neural Network Based Ensemble learning Algorithms for the healthcare system (diagnosis of chronic diseases)

arXiv.org Artificial Intelligence

Diagnosis of chronic diseases and assistance in medical decision making are increasingly based on machine learning algorithms. In this paper, we review the classification algorithms used in the healthcare system for chronic diseases and present a neural-network-based ensemble learning method. We briefly describe the commonly used algorithms and their critical properties. Materials and Methods: In this study, we examine modern classification algorithms used in healthcare and the principles behind these methods, and, to accurately diagnose and predict chronic diseases, apply superior machine learning algorithms combined with neural-network-based ensemble learning. To do this, we use experimental, real-world data on chronic patients (diabetes, heart disease, cancer) available from the UCI repository. Results: We found that ensemble algorithms designed to diagnose chronic diseases can be more effective than the baseline algorithms. We also identify several challenges to further advancing machine learning classification in the diagnosis of chronic diseases. Conclusion: The results show the high performance of the neural-network-based ensemble learning approach for the diagnosis and prediction of chronic diseases, which in this study reached 98.5%, 99%, and 100% accuracy, respectively.
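The core of the ensemble approach this abstract describes is combining the predictions of several base networks. As a minimal, hypothetical sketch (not the paper's actual pipeline), majority voting over the class labels of independently trained classifiers can be written as:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model class predictions by majority vote.

    predictions: list of lists, one inner list of class labels per
    base model, all of equal length (one label per sample).
    """
    n_samples = len(predictions[0])
    combined = []
    for i in range(n_samples):
        votes = [model_preds[i] for model_preds in predictions]
        combined.append(Counter(votes).most_common(1)[0][0])
    return combined

# Hypothetical predictions from three base networks on five patients
# (1 = disease present, 0 = absent); these are made-up labels.
net_a = [1, 0, 1, 1, 0]
net_b = [1, 1, 1, 0, 0]
net_c = [0, 0, 1, 1, 0]

print(majority_vote([net_a, net_b, net_c]))  # [1, 0, 1, 1, 0]
```

In practice the base models would be trained neural networks, and soft voting (averaging predicted probabilities) is a common alternative to the hard vote shown here.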


Reconstruction of high-resolution 6x6-mm OCT angiograms using deep learning

arXiv.org Machine Learning

Abstract: Typical optical coherence tomographic angiography (OCTA) acquisition areas on commercial devices are 3×3- or 6×6-mm. Compared to 3×3-mm angiograms with proper sampling density, 6×6-mm angiograms have significantly lower scan quality, with reduced signal-to-noise ratio and worse shadow artifacts due to undersampling. Here, we propose a deep-learning-based high-resolution angiogram reconstruction network (HARNet) to generate enhanced 6×6-mm superficial vascular complex (SVC) angiograms. The network was trained on data from 3×3-mm and 6×6-mm angiograms from the same eyes. The reconstructed 6×6-mm angiograms have significantly lower noise intensity, stronger contrast, and better vascular connectivity than the original images. The algorithm did not generate false flow signal at the noise level presented by the original angiograms. The image enhancement produced by our algorithm may improve biomarker measurements and qualitative clinical assessment of 6×6-mm OCTA.

1. Introduction. Optical coherence tomographic angiography (OCTA) is a noninvasive imaging technology that can capture retinal and choroidal microvasculature in vivo [1]. Clinicians are rapidly adopting OCTA for evaluation of various diseases, including diabetic retinopathy (DR) [2, 3], age-related macular degeneration (AMD) [4, 5], glaucoma [6, 7], and retinal vessel occlusion (RVO) [8, 9]. High-resolution and large-field-of-view OCTA improve clinical observations, provide useful biomarkers, and enhance the understanding of retinal and choroidal microvascular circulations [10-13].
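The reconstruction network itself is not specified in this excerpt, but the residual-learning pattern common to super-resolution CNNs (predict a detail map and add it back to the low-quality input) can be sketched with a single hand-set convolution kernel. Both the toy image and the kernel below are illustrative assumptions, not HARNet's learned weights:

```python
import numpy as np

def conv2d(img, kernel):
    """Naive 'same'-padded 2-D convolution over a single-channel image."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def enhance(low_res, kernel):
    """Residual enhancement: predicted detail is added back to the input,
    mirroring the residual-learning design used by many reconstruction CNNs."""
    residual = conv2d(low_res, kernel)
    return low_res + residual

# Toy 4x4 'angiogram' and a Laplacian-like sharpening residual kernel.
img = np.array([[0., 0., 1., 1.],
                [0., 0., 1., 1.],
                [1., 1., 0., 0.],
                [1., 1., 0., 0.]])
kernel = np.array([[0., -0.25, 0.],
                   [-0.25, 1., -0.25],
                   [0., -0.25, 0.]])
print(enhance(img, kernel).shape)  # (4, 4)
```

A trained network would learn many such kernels (and nonlinearities) from the paired 3×3-mm/6×6-mm data; here the residual kernel simply sharpens edges.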


Patient Similarity Analysis with Longitudinal Health Data

arXiv.org Machine Learning

Healthcare professionals have long envisioned using the enormous processing powers of computers to discover new facts and medical knowledge locked inside electronic health records. These vast medical archives contain time-resolved information about medical visits, tests and procedures, as well as outcomes, which together form individual patient journeys. By assessing the similarities among these journeys, it is possible to uncover clusters of common disease trajectories with shared health outcomes. The assignment of patient journeys to specific clusters may in turn serve as the basis for personalized outcome prediction and treatment selection. This procedure is a non-trivial computational problem, as it requires the comparison of patient data with multi-dimensional and multi-modal features that are captured at different times and resolutions. In this review, we provide a comprehensive overview of the tools and methods that are used in patient similarity analysis with longitudinal data and discuss its potential for improving clinical decision making.
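One standard way to compare journeys "captured at different times and resolutions" is dynamic time warping (DTW), which aligns series of unequal length before measuring distance. This is a generic sketch with made-up lab-value trajectories, not a method claimed by the review:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D measurement series,
    tolerant of journeys recorded at different lengths and visit spacings."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return cost[n, m]

# Hypothetical lab-value trajectories for three patients.
p1 = [5.0, 5.5, 6.0, 7.0]
p2 = [5.1, 6.1, 7.2]          # similar trajectory, fewer visits
p3 = [9.0, 8.5, 8.0]          # a clearly different journey

print(dtw_distance(p1, p2) < dtw_distance(p1, p3))  # True
```

A pairwise DTW matrix over all patients can then feed any standard clustering algorithm to recover the shared-trajectory clusters discussed above.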


Artificial Intelligence in Cardiology: Present and Future

#artificialintelligence

For the purpose of this narrative review, we searched the PubMed and MEDLINE databases with no date restriction, using search terms related to AI, medicine, and cardiology subspecialties. Articles were reviewed and selected for inclusion on the basis of relevance. This article highlights that the role of ML in cardiovascular medicine is rapidly emerging, and mounting evidence indicates it will power the new tools that drive the field. Among other uses, AI has been deployed to interpret echocardiograms, to automatically identify heart rhythms from an ECG, to uniquely identify an individual using the ECG as a biometric signal, and to detect the presence of heart disease such as left ventricular dysfunction from the surface ECG (Attia, Z.I., Kapa, S., Lopez-Jimenez, F. et al.).


TRACER: A Framework for Facilitating Accurate and Interpretable Analytics for High Stakes Applications

arXiv.org Artificial Intelligence

In high stakes applications such as healthcare and finance analytics, the interpretability of predictive models is necessary for domain practitioners to trust the predictions. Traditional machine learning models, e.g., logistic regression (LR), are inherently easy to interpret. However, many of these models aggregate time-series data without considering the temporal correlations and variations. Therefore, their performance cannot match that of recurrent neural network (RNN) based models, which are nonetheless difficult to interpret. In this paper, we propose a general framework, TRACER, to facilitate accurate and interpretable predictions, with a novel model, TITV, devised for healthcare analytics and other high stakes applications such as financial investment and risk management. Different from LR and other existing RNN-based models, TITV is designed to capture both the time-invariant and the time-variant feature importance, using a feature-wise transformation subnetwork and a self-attention subnetwork for the feature influence shared over the entire time series and the time-related importance, respectively. Healthcare analytics is adopted as a driving use case, and we note that the proposed TRACER is also applicable to other domains, e.g., fintech. We evaluate the accuracy of TRACER extensively on two real-world hospital datasets, and our doctors/clinicians further validate the interpretability of TRACER at both the patient level and the feature level. Besides, TRACER is also validated in a high stakes financial application and a critical temperature forecasting application. The experimental results confirm that TRACER facilitates both accurate and interpretable analytics for high stakes applications.
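The abstract's decomposition into time-invariant and time-variant feature importance can be illustrated, very loosely, as the outer product of a per-feature weight vector and a per-step attention vector. The numbers and the softmax parameterization below are illustrative assumptions, not TITV's actual subnetworks:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def titv_importance(series, invariant_logits, attention_scores):
    """Sketch of a TITV-style decomposition (hypothetical parameters):
    a time-invariant weight per feature combined with a time-variant
    attention weight per step gives one importance score per (step, feature)."""
    time_invariant = softmax(invariant_logits)   # shape (n_features,)
    time_variant = softmax(attention_scores)     # shape (n_steps,)
    # Outer product: importance of feature f at time step t.
    importance = np.outer(time_variant, time_invariant)
    score = (series * importance).sum()          # scalar risk score
    return importance, score

# 3 time steps x 2 features of (already normalised) toy measurements.
series = np.array([[0.2, 1.0],
                   [0.4, 0.9],
                   [0.8, 0.1]])
imp, score = titv_importance(series,
                             invariant_logits=np.array([0.0, 1.0]),
                             attention_scores=np.array([0.0, 0.0, 2.0]))
print(imp.shape)  # (3, 2)
```

In the real model both weight vectors would be produced by learned subnetworks conditioned on the input; here they are fixed to show how the two factors combine into per-(step, feature) importances.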


Artificial Intelligence Examining ECGs Predicts Irregular Heartbeat, Death Risk - Docwire News

#artificialintelligence

Artificial intelligence can be used to accurately examine electrocardiogram (ECG) test results, according to the findings of two preliminary studies presented at the American Heart Association Scientific Sessions 2019 in Philadelphia, PA. In the first study, researchers evaluated 1.1 million ECGs that did not indicate atrial fibrillation (AF) from more than 237,000 patients. They used specialized computational hardware to train a deep neural network to assess 30,000 data points for each ECG. The results showed that approximately one in three people received an AF diagnosis within a year. Moreover, the model demonstrated long-term prognostic significance: patients predicted to develop AF had a 45% higher hazard of developing AF over a 25-year follow-up compared to other patients.


New Study Shows EarlySign's Machine Learning Algorithm Can Predict Which Cardiac Patients are at High-Risk Following Discharge

#artificialintelligence

Medial EarlySign (earlysign.com), a leader in machine-learning based solutions to aid in early detection and prevention of high-burden diseases, today announced the results of new research with Mayo Clinic assessing the effectiveness of machine learning for predicting cardiac patients' future risk trajectories following hospital discharge. The peer-reviewed retrospective data study, Leveraging Machine Learning Techniques to Forecast Patient Prognosis After Percutaneous Coronary Intervention, published in JACC: Cardiovascular Interventions, evaluated the ability of machine learning models to assess risk for patients who underwent percutaneous coronary intervention (PCI) inside the hospital and following their discharge. The analyzed algorithm was developed by Medial EarlySign data scientists to identify patients at highest risk of complications and hospital readmission after undergoing PCI, one of the most frequently performed procedures in U.S. hospitals. "Contemporary risk models have traditionally had little success in identifying patients' post-PCI risks for complications, in-patient mortality, and hospital readmission. This study shows that machine learning tools may enable cardiology care teams to identify patients who may be on high-risk trajectories," said Rajiv Gulati, MD, Ph.D., Interventional Cardiologist at Mayo Clinic.


Reinforcement Learning in Healthcare: A Survey

arXiv.org Artificial Intelligence

As a subfield of machine learning, \emph{reinforcement learning} (RL) aims at empowering an agent's capabilities in behavioural decision making by using interaction experience with the world and evaluative feedback. Unlike traditional supervised learning methods, which usually rely on one-shot, exhaustive, and supervised reward signals, RL tackles sequential decision-making problems with sampled, evaluative, and delayed feedback. These distinctive features make RL a suitable candidate for developing powerful solutions in a variety of healthcare domains, where diagnostic decisions or treatment regimes are usually characterized by a prolonged and sequential procedure. This survey discusses the broad applications of RL techniques in healthcare, in order to provide the research community with a systematic understanding of the theoretical foundations, enabling methods and techniques, existing challenges, and new insights of this emerging paradigm. After first briefly examining the theoretical foundations and key techniques of RL research from the efficiency and representation directions, we provide an overview of RL applications in a variety of healthcare domains, ranging from dynamic treatment regimes in chronic diseases and critical care, to automated medical diagnosis from both unstructured and structured clinical data, to many other control or scheduling problems that permeate a healthcare system. Finally, we summarize the challenges and open issues in current research, and point out potential solutions and directions for future research.
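The "sampled, evaluative and delayed feedback" setting can be made concrete with tabular Q-learning on a toy sequential problem. The states, actions, and rewards below are invented purely for illustration and bear no relation to any clinical protocol:

```python
import random

def q_learning(transition, n_states, n_actions, episodes=2000,
               alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning. `transition(state, action)` returns
    (next_state, reward, done): the delayed, evaluative feedback that
    distinguishes RL from one-shot supervised labels."""
    rng = random.Random(seed)
    q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if rng.random() < epsilon:                       # explore
                a = rng.randrange(n_actions)
            else:                                            # exploit
                a = max(range(n_actions), key=lambda x: q[s][x])
            s2, r, done = transition(s, a)
            target = r if done else r + gamma * max(q[s2])
            q[s][a] += alpha * (target - q[s][a])
            s = s2
    return q

# Toy regime: states 0 (sick) -> 1 (recovering) -> 2 (healthy, terminal).
# Action 1 ('treat') advances the patient; action 0 ('wait') does not.
def transition(state, action):
    if action == 1:
        nxt = state + 1
        return nxt, (1.0 if nxt == 2 else 0.0), nxt == 2
    return state, -0.1, False

q = q_learning(transition, n_states=3, n_actions=2)
print(q[0][1] > q[0][0])  # True: treating beats waiting in the sick state
```

The reward for treating arrives only at the end of the episode, yet the learned values correctly prefer treatment in every non-terminal state — the credit-assignment behaviour that makes RL attractive for prolonged treatment regimes.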


Effective Learning of Probabilistic Models for Clinical Predictions from Longitudinal Data

arXiv.org Machine Learning

Such information includes: databases in modern hospital systems, usually known as Electronic Health Records (EHRs), which store patients' diagnoses, medications, laboratory test results, medical image data, etc.; information on various health behaviors tracked and stored by wearable devices, ubiquitous sensors, and mobile applications, such as smoking status, alcohol history, exercise level, sleeping conditions, etc.; information collected by censuses or various surveys regarding sociodemographic factors of the target cohort; and information on people's mental health inferred from their activities on social media or social networks such as Twitter, Facebook, etc. These health-related data come from heterogeneous sources and describe assorted aspects of an individual's health condition. Such data are rich in structure and information, with great research potential for revealing unknown medical knowledge about genomic epidemiology, disease development and correlations, drug discovery, medical diagnosis, mental illness prevention, health behavior adaptation, etc. In real-world problems, the number of features relating to a certain health condition can grow exponentially with the development of new information technologies for collecting and measuring data. Revealing the causal influence between various factors and a certain disease, or discovering the correlations among diseases from data at such a tremendous scale, requires the assistance of advanced information technology such as data mining, machine learning, text mining, etc. Machine learning not only provides a way to learn qualitative relationships among features and patients, but also the quantitative parameters describing the strength of such correlations.
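As a toy illustration of learning both a qualitative relationship and its quantitative strength from record-style data, one can estimate a conditional probability table from hand-made, entirely synthetic records (no real patient data is involved):

```python
from collections import Counter

def conditional_probs(records, feature, outcome="disease"):
    """Estimate P(outcome | feature value) from EHR-like dicts: a minimal
    example of learning which feature relates to the outcome (qualitative)
    and how strongly (the quantitative conditional probability)."""
    positives = Counter()
    totals = Counter()
    for rec in records:
        totals[rec[feature]] += 1
        if rec[outcome]:
            positives[rec[feature]] += 1
    return {value: positives[value] / totals[value] for value in totals}

# Hypothetical, hand-made records (synthetic, for illustration only).
records = [
    {"smoker": True,  "disease": True},
    {"smoker": True,  "disease": True},
    {"smoker": True,  "disease": False},
    {"smoker": False, "disease": False},
    {"smoker": False, "disease": False},
    {"smoker": False, "disease": True},
]
probs = conditional_probs(records, "smoker")
print(probs[True] > probs[False])  # True
```

Full probabilistic models (e.g., Bayesian networks) generalize exactly this kind of conditional-probability estimation to many interacting features at once.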