
Hybrid Deep Convolutional Neural Networks Combined with Autoencoders And Augmented Data To Predict The Look-Up Table 2006

Djeddou, Messaoud, Hellal, Aouatef, Hameed, Ibrahim A., Zhao, Xingang, Dallal, Djehad Al

arXiv.org Artificial Intelligence

This study explores the development of a hybrid deep convolutional neural network (DCNN) model enhanced by autoencoders and data augmentation techniques to predict critical heat flux (CHF) with high accuracy. By augmenting the original input features using three different autoencoder configurations, the model's predictive capabilities were significantly improved. The hybrid models were trained and tested on a dataset of 7225 samples, with performance metrics including the coefficient of determination (R2), Nash-Sutcliffe efficiency (NSE), mean absolute error (MAE), and normalized root-mean-squared error (NRMSE) used for evaluation. Among the tested models, the DCNN_3F-A2 configuration demonstrated the highest accuracy, achieving an R2 of 0.9908 during training and 0.9826 during testing, outperforming the base model and other augmented versions. These results suggest that the proposed hybrid approach, combining deep learning with feature augmentation, offers a robust solution for CHF prediction, with the potential to generalize across a wider range of conditions.
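The augmentation idea described here — enriching the original input features with autoencoder-derived codes before feeding them to the network — can be illustrated with a minimal sketch. This is not the authors' DCNN pipeline: it uses a linear autoencoder (whose optimum coincides with PCA, so the latent codes can be computed via SVD), and the latent dimension of 3 is a hypothetical choice.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))  # stand-in for the tabular CHF input features

# A linear autoencoder's optimal encoder spans the top principal
# components, so the latent codes can be obtained directly via SVD.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3                       # latent dimension (hypothetical choice)
Z = Xc @ Vt[:k].T           # latent codes from the "encoder"

# Augmented input: original features concatenated with the latent codes,
# mirroring the idea of feeding both to the predictive network.
X_aug = np.hstack([X, Z])
print(X_aug.shape)          # (200, 8)
```

In the paper's setting, a nonlinear autoencoder trained on the CHF data would replace the SVD step, but the concatenation-based augmentation is the same.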


A Fast and Simple Algorithm for computing the MLE of Amplitude Density Function Parameters

Teimouri, Mahdi

arXiv.org Machine Learning

Over the last decades, the family of $\alpha$-stable distributions has proven useful for modelling in telecommunication systems. In radar applications in particular, a fast and accurate estimator of the amplitude density function parameters is very important. In this work, the maximum likelihood estimator (MLE) is proposed for the parameters of the amplitude distribution. To do this, the amplitude data are \emph{projected} onto the horizontal and vertical axes using two simple transformations. It is proved that the \emph{projected} data follow a zero-location symmetric $\alpha$-stable distribution for which the MLE can be computed quite quickly. The average of the MLEs computed from the two \emph{projections} is taken as the estimator of the parameters of the amplitude distribution. The performance of the proposed \emph{projection} method is demonstrated through a simulation study and the analysis of two sets of real radar data.
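The projection step can be sketched in a few lines. This is a simplification, not the paper's estimator: it uses the Gaussian special case ($\alpha = 2$), where the amplitude is Rayleigh-distributed and the projected data are zero-mean Gaussian, so the per-projection MLE of the scale is closed-form and the two estimates can be averaged as described.

```python
import numpy as np

rng = np.random.default_rng(1)

# Amplitude data R = sqrt(X^2 + Y^2), here for the Gaussian special
# case (alpha = 2) so every step stays closed-form.
sigma = 2.0
x, y = rng.normal(0, sigma, 5000), rng.normal(0, sigma, 5000)
r = np.hypot(x, y)
theta = rng.uniform(0, 2 * np.pi, r.size)

# The two "projections" onto the horizontal and vertical axes:
p1 = r * np.cos(theta)
p2 = r * np.sin(theta)

# Each projection is zero-location symmetric stable (here: Gaussian),
# so the scale MLE is closed-form; average the two estimates.
est1 = np.sqrt(np.mean(p1 ** 2))
est2 = np.sqrt(np.mean(p2 ** 2))
sigma_hat = 0.5 * (est1 + est2)
print(sigma_hat)  # close to the true scale of 2.0
```

For general $\alpha < 2$ the projected data are symmetric $\alpha$-stable rather than Gaussian, and the per-projection MLE requires the numerical procedure developed in the paper.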


Interpretable estimation of the risk of heart failure hospitalization from a 30-second electrocardiogram

González, Sergio, Hsieh, Wan-Ting, Burba, Davide, Chen, Trista Pei-Chun, Wang, Chun-Li, Wu, Victor Chien-Chia, Chang, Shang-Hung

arXiv.org Artificial Intelligence

Survival modeling in healthcare relies on explainable statistical models; yet, their underlying assumptions are often simplistic and, thus, unrealistic. Machine learning models can estimate more complex relationships and lead to more accurate predictions, but are non-interpretable. This study shows that it is possible to estimate the risk of hospitalization for congestive heart failure from a 30-second single-lead electrocardiogram signal. Using a machine learning approach not only results in greater predictive power but also provides clinically meaningful interpretations. We train an eXtreme Gradient Boosting accelerated failure time model and exploit SHapley Additive exPlanations (SHAP) values to explain the effect of each feature on predictions. Our model achieved a concordance index of 0.828 and an area under the curve of 0.853 at one year and 0.858 at two years on a held-out test set of 6,573 patients. These results show that a rapid test based on an electrocardiogram could be crucial in targeting and treating high-risk individuals.
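The headline metric here, the concordance index, measures how often the model's risk scores correctly order pairs of patients by their observed survival times. A minimal reference implementation (not the authors' code, and an O(n²) loop rather than a production routine) looks like this:

```python
import numpy as np

def concordance_index(time, event, risk):
    """Fraction of comparable pairs ordered correctly by the risk score.

    A pair (i, j) is comparable when the subject with the shorter
    observed time had an event; it is concordant when that subject
    also has the higher predicted risk. Ties in risk count as half.
    """
    n_conc, n_comp = 0.0, 0
    for i in range(len(time)):
        if not event[i]:
            continue  # censored subjects cannot anchor a comparable pair
        for j in range(len(time)):
            if time[i] < time[j]:
                n_comp += 1
                if risk[i] > risk[j]:
                    n_conc += 1
                elif risk[i] == risk[j]:
                    n_conc += 0.5
    return n_conc / n_comp

# Toy check: a risk score that exactly reverses survival time is
# perfectly concordant.
t = np.array([2.0, 5.0, 1.0, 8.0])
e = np.array([1, 1, 1, 0])
r = -t
print(concordance_index(t, e, r))  # 1.0
```

A value of 0.828, as reported above, means roughly 83% of comparable patient pairs are ranked correctly; 0.5 would be chance level.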


A robust algorithm for explaining unreliable machine learning survival models using the Kolmogorov-Smirnov bounds

Kovalev, Maxim S., Utkin, Lev V.

arXiv.org Machine Learning

A new robust algorithm called SurvLIME-KS, based on the explanation method SurvLIME, is proposed for explaining machine learning survival models. The algorithm is designed to be robust to small amounts of training data and to outliers in survival data. The first idea behind SurvLIME-KS is to apply the Cox proportional hazards model to approximate the black-box survival model in a local area around a test example, exploiting the linear relationship among covariates in the Cox model. The second idea is to incorporate the well-known Kolmogorov-Smirnov bounds to construct sets of predicted cumulative hazard functions. The resulting robust maximin strategy minimizes the average distance between the cumulative hazard functions of the explained black-box model and of the approximating Cox model, while maximizing that distance over all cumulative hazard functions in the interval produced by the Kolmogorov-Smirnov bounds. The maximin optimization problem reduces to a quadratic program. Various numerical experiments with synthetic and real datasets demonstrate the efficiency of SurvLIME-KS.
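The Kolmogorov-Smirnov bounds invoked here are distribution-free confidence bands around an empirical distribution estimate. As a minimal illustration (not the paper's construction, which applies analogous bounds to cumulative hazard functions inside a maximin quadratic program), here is a KS/DKW-style band around an empirical survival function:

```python
import numpy as np

def ks_band(samples, alpha=0.05):
    """Distribution-free band around the empirical survival function,
    using the Dvoretzky-Kiefer-Wolfowitz form of the KS bound."""
    x = np.sort(samples)
    n = x.size
    s_hat = 1.0 - np.arange(1, n + 1) / n         # empirical survival
    eps = np.sqrt(np.log(2.0 / alpha) / (2 * n))  # KS/DKW half-width
    lower = np.clip(s_hat - eps, 0.0, 1.0)
    upper = np.clip(s_hat + eps, 0.0, 1.0)
    return x, lower, s_hat, upper

rng = np.random.default_rng(2)
x, lo, s, up = ks_band(rng.exponential(1.0, 100))
# With probability at least 1 - alpha, the true survival function
# lies entirely inside [lo, up].
```

In SurvLIME-KS, the width of such a band shrinks with sample size, which is exactly why the method degrades gracefully when training data are scarce.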


SurvLIME: A method for explaining machine learning survival models

Kovalev, Maxim S., Utkin, Lev V., Kasimov, Ernest M.

arXiv.org Machine Learning

Many complex problems in various applications are currently solved by deep machine learning models, in particular deep neural networks. One demonstrative example is disease diagnosis on the basis of medical images or other medical information. At the same time, deep learning models often work as black boxes, such that the details of their functioning are largely unknown and it is difficult to explain how a certain result or decision is reached. As a result, machine learning models face difficulties in being incorporated into many important applications, for example medicine, where doctors need an explanation of a stated diagnosis in order to choose a corresponding treatment. The lack of explanation in many machine learning models has motivated the development of methods that interpret or explain deep learning predictions and shed light on the decision-making process or the key factors involved in a decision [4, 18, 35, 36]. Methods for explaining black-box machine learning models can be divided into two main groups: local methods, which derive an explanation locally around a test example, and global methods, which try to explain the overall behavior of the model. A key component of an explanation is the contribution of individual input features: a prediction is considered explained when every feature is assigned a number quantifying its impact on the prediction.


Time evolution of the characteristic and probability density function of diffusion processes via neural networks

Uy, Wayne Isaac Tan, Grigoriu, Mircea

arXiv.org Machine Learning

We investigate the use of physics-informed neural networks to solve the PDE satisfied by the probability density function (pdf) of the state of a dynamical system subject to random forcing. Two alternatives for the PDE are considered: the Fokker-Planck equation and a PDE for the characteristic function (chf) of the state, both of which provide the same probabilistic information. Solving these PDEs using the finite element method is infeasible when the dimension of the state exceeds 3. We examine, analytically and numerically, the advantages and disadvantages of solving one PDE rather than the other. It is also demonstrated how prior information about the dynamical system can be exploited to design and simplify the neural network architecture. Numerical examples show that: 1) the neural network solution can approximate the target solution even for partial integro-differential equations and systems of PDEs, 2) solving either PDE using neural networks yields similar pdfs of the state, and 3) the solution to the PDE can be used to study the behavior of the state for different types of random forcing.
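For intuition about what the physics-informed residual loss enforces, the Fokker-Planck operator can be checked directly on a known solution. The sketch below (a finite-difference check, not the paper's neural network) uses a 1-D Ornstein-Uhlenbeck process, whose stationary pdf is Gaussian, and verifies that the stationary Fokker-Planck residual vanishes on it:

```python
import numpy as np

# 1-D Ornstein-Uhlenbeck process: dX = -theta*X dt + sigma dW.
# Its stationary Fokker-Planck equation is
#   0 = d/dx(theta * x * p) + (sigma^2 / 2) * d2p/dx2,
# which a PINN would enforce as a residual loss at collocation points.
# Here we verify the residual vanishes on the exact stationary density.
theta, sigma = 1.0, 1.0
var = sigma ** 2 / (2 * theta)          # stationary variance
x = np.linspace(-4, 4, 801)
h = x[1] - x[0]
p = np.exp(-x ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

drift_term = np.gradient(theta * x * p, h)
diffusion_term = 0.5 * sigma ** 2 * np.gradient(np.gradient(p, h), h)
residual = drift_term + diffusion_term
print(np.max(np.abs(residual)))  # ~0 up to discretization error
```

A PINN replaces the exact density with a network output and minimizes the mean squared residual (plus boundary and normalization terms) over sampled points, which is what makes the approach viable in dimensions where finite elements are not.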


Artificial intelligence neural network approach detected heart failure from a single heartbeat

#artificialintelligence

Another wave of enthusiasm hit the media in recent weeks, unfolding the endless opportunities of artificial intelligence (AI) in the service of medicine. This time the "new AI neural network approach detected heart failure from a single heartbeat with 100% diagnostic accuracy" – an energetic and simple message – was shared and reshared across social media and beyond. Lay readers fueled the rolling snowball with comments and views suggesting that the Goliath of medicine has been successfully defeated by the magic capabilities of AI. But are we really done with heart failure? The paper that gave grounds for these discussions was published online in Biomedical Signal Processing and Control by Mihaela Porumb et al. (1). The authors have implemented, in a very elegant manner, a new approach to electrocardiogram (ECG) analysis using hierarchical neural networks that mimic the human visual system, called convolutional neural networks (CNNs or ConvNets) (2). This class of deep neural networks enables image recognition and classification and is widely used for object and face recognition.


AI detects congestive heart failure with one heartbeat

#artificialintelligence

A new study has reported success in identifying severe heart failure in 100% of cases using a single heartbeat recording from an electrocardiogram (ECG). Medically, the condition called congestive heart failure (CHF) refers to a chronic, progressive loss of the heart's pumping power. It is fairly common, causes significant illness and disability, and pushes up the costs of medical care. It affects about 26 million people around the world and is more common in the elderly. It causes a considerable number of deaths, with about 40% mortality among the most severe cases.


AI Can Detect Heart Failure With 100% Accuracy By Hearing Just A Single Heartbeat

#artificialintelligence

In the recent past, it's become easier to detect heart conditions with technology. The Apple Watch has become pretty good at detecting arrhythmia for instance. But some researchers have been developing AI to detect heart problems, and one team may have the best version yet. According to a recent study published in the Biomedical Signal Processing and Control Journal, a team of researchers from the Universities of Surrey, Warwick and Florence have a new neural network that can detect cardiac anomalies from a single heartbeat with 100% accuracy. Their AI can quickly and accurately detect congestive heart failure (CHF) by analyzing one heartbeat on an electrocardiogram (ECG).


Novel AI system proves 100% accurate at detecting heart failure from a single heartbeat

#artificialintelligence

Nearly 10 percent of adults over the age of 65 suffer from some kind of congestive heart failure (CHF). There are a variety of different causes for CHF but the fundamental chronic condition generally results from the heart being unable to pump blood effectively through the body. X-rays, blood tests, and ultrasounds all offer clinicians useful ways to diagnose CHF, but one of the more common methods involves using electrocardiogram (ECG) signals to determine heart rate variability over a number of minutes, or even multiple measurements over days. An impressive new approach has now been demonstrated, using a convolutional neural network (CNN) that can identify CHF nearly instantly by checking ECG data from just one heartbeat. "We trained and tested the CNN model on large publicly available ECG datasets featuring subjects with CHF as well as healthy, non-arrhythmic hearts," says Sebastian Massaro, from the University of Surrey.