murmur detection
SS-DPPN: A self-supervised dual-path foundation model for the generalizable cardiac audio representation
Muna, Ummy Maria, Shawon, Md Mehedi Hasan, Jobayer, Md, Akter, Sumaiya, Hasan, Md Rakibul, Alam, Md. Golam Rabiul
The automated analysis of phonocardiograms is vital for the early diagnosis of cardiovascular disease, yet supervised deep learning is often constrained by the scarcity of expert-annotated data. In this paper, we propose the Self-Supervised Dual-Path Prototypical Network (SS-DPPN), a foundation model for cardiac audio representation and classification from unlabeled data. The framework introduces a dual-path contrastive-learning architecture that simultaneously processes 1D waveforms and 2D spectrograms using a novel hybrid loss. For the downstream task, a metric-learning approach based on a Prototypical Network enhances sensitivity and produces well-calibrated, trustworthy predictions. SS-DPPN achieves state-of-the-art performance on four cardiac audio benchmarks. The framework demonstrates exceptional data efficiency, matching a fully supervised model with a three-fold reduction in labeled data. Finally, the learned representations generalize successfully across lung sound classification and heart rate estimation. Our experiments and findings validate SS-DPPN as a robust, reliable, and scalable foundation model for physiological signals.
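The Prototypical-Network classification step described in the abstract above can be sketched in a few lines: each class is summarized by the mean ("prototype") of its support embeddings, and a query is assigned to the nearest prototype. This is a generic illustration with hypothetical toy 2-D embeddings standing in for the encoder's output, not the paper's trained model.

```python
import numpy as np

def prototypes(support_emb, support_labels):
    """Return the class list and the mean embedding (prototype) per class."""
    classes = np.unique(support_labels)
    protos = np.stack([support_emb[support_labels == c].mean(axis=0)
                       for c in classes])
    return classes, protos

def classify(query_emb, classes, protos):
    """Assign each query to the class of its nearest prototype (Euclidean)."""
    dists = np.linalg.norm(query_emb[:, None, :] - protos[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]

# Two well-separated toy classes in a 2-D embedding space.
support = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [4.8, 5.2]])
labels = np.array([0, 0, 1, 1])
classes, protos = prototypes(support, labels)
print(classify(np.array([[0.1, 0.0], [5.1, 4.9]]), classes, protos))  # -> [0 1]
```

Because classification reduces to distances against per-class means, only a handful of labeled examples per class are needed, which is consistent with the data-efficiency claim above.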
Scattering Transformer: A Training-Free Transformer Architecture for Heart Murmur Detection
To reduce reliance on skilled clinicians for heart sound interpretation, recent research on automating cardiac auscultation has explored deep learning approaches. The majority of these approaches are based on supervised learning, which is challenged whenever training data is limited. More recently, there has been growing interest in the potential of pre-trained self-supervised audio foundation models for biomedical end tasks. Despite exhibiting promising results, these foundation models are typically computationally intensive. Within the context of automatic cardiac auscultation, this study explores a lightweight alternative to these general-purpose audio foundation models by introducing the Scattering Transformer, a novel, training-free transformer architecture for heart murmur detection. The proposed method leverages standard wavelet scattering networks, introducing contextual dependencies in a transformer-like architecture without any backpropagation. We evaluate our approach on the public CirCor DigiScope dataset, directly comparing it against leading general-purpose foundation models. The Scattering Transformer achieves a Weighted Accuracy (WAR) of 0.786 and an Unweighted Average Recall (UAR) of 0.697, demonstrating performance highly competitive with contemporary state-of-the-art methods. This study establishes the Scattering Transformer as a viable and promising alternative in resource-constrained setups.
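One way a transformer-like layer can add contextual dependencies with no learned weights and no backpropagation is parameter-free self-attention, where queries, keys, and values are all the raw feature matrix itself. The sketch below illustrates that idea on a hypothetical frame-by-path feature matrix; the actual Scattering Transformer layer may differ.

```python
import numpy as np

def parameter_free_attention(X):
    """X: (tokens, dim), e.g. wavelet-scattering coefficients per time frame."""
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                 # similarity between frames
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=1, keepdims=True)             # row-wise softmax
    return w @ X                                  # context-mixed features

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 16))   # 8 time frames x 16 scattering paths
Y = parameter_free_attention(X)
print(Y.shape)  # (8, 16)
```

Each output frame is a similarity-weighted mixture of all frames, so downstream pooling sees context across the whole recording while the wavelet scattering front end stays fixed.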
Congenital Heart Disease Classification Using Phonocardiograms: A Scalable Screening Tool for Diverse Environments
Jabbar, Abdul, Grooby, Ethan, Crozier, Jack, Gallon, Alexander, Pham, Vivian, Ahmad, Khawza I, Hassanuzzaman, Md, Mostafa, Raqibul, Khandoker, Ahsan H., Marzbanrad, Faezeh
Congenital heart disease (CHD) is a critical condition that demands early detection, particularly in infancy and childhood. This study presents a deep learning model designed to detect CHD using phonocardiogram (PCG) signals, with a focus on its application in global health. We evaluated our model on several datasets, including the primary dataset from Bangladesh, achieving a high accuracy of 94.1%, sensitivity of 92.7%, and specificity of 96.3%. The model also demonstrated robust performance on the public PhysioNet Challenge 2022 and 2016 datasets, underscoring its generalizability to diverse populations and data sources. We assessed the performance of the algorithm for single and multiple auscultation sites on the chest, demonstrating that the model maintains over 85% accuracy even when using a single location. Furthermore, our algorithm achieved an accuracy of 80% on low-quality recordings that cardiologists deemed non-diagnostic. This research suggests that an AI-driven digital stethoscope could serve as a cost-effective screening tool for CHD in resource-limited settings, enhancing clinical decision support and ultimately improving patient outcomes.
Model-driven Heart Rate Estimation and Heart Murmur Detection based on Phonocardiogram
Nie, Jingping, Liu, Ran, Mahasseni, Behrooz, Azemi, Erdrin, Mitra, Vikramjit
Acoustic signals are crucial for health monitoring, particularly heart sounds, which provide essential data such as heart rate and reveal cardiac anomalies such as murmurs. This study utilizes a publicly available phonocardiogram (PCG) dataset to estimate heart rate using model-driven methods and extends the best-performing model to a multi-task learning (MTL) framework for simultaneous heart rate estimation and murmur detection. Heart rate estimates are derived using a sliding window technique on heart sound snippets, analyzed with a combination of acoustic features (Mel spectrogram, cepstral coefficients, power spectral density, root mean square energy). Our findings indicate that a 2D convolutional neural network (2dCNN) is most effective for heart rate estimation, achieving a mean absolute error (MAE) of 1.312 bpm. We systematically investigate the impact of different feature combinations and find that utilizing all four features yields the best results. The MTL model (2dCNN-MTL) achieves accuracy over 95% in murmur detection, surpassing existing models, while maintaining an MAE of 1.636 bpm in heart rate estimation, satisfying the requirements stated by the Association for the Advancement of Medical Instrumentation (AAMI).
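The sliding-window heart-rate estimation above can be illustrated with a simple model-driven stand-in: here the 2dCNN regressor is replaced by autocorrelation-based periodicity detection (an assumption for demonstration only), where each window's dominant beat period is converted to bpm and the per-window estimates are averaged. The sampling rate and the synthetic test signal are placeholders.

```python
import numpy as np

FS = 1000  # sampling rate in Hz (assumed placeholder)

def window_bpm(snippet, fs=FS, min_bpm=40, max_bpm=200):
    """Dominant periodicity of one window, via the autocorrelation peak."""
    x = snippet - snippet.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0..n-1
    lo, hi = int(fs * 60 / max_bpm), int(fs * 60 / min_bpm)
    lag = lo + int(np.argmax(ac[lo:hi]))                # restrict to plausible bpm
    return 60.0 * fs / lag

def estimate_hr(pcg, fs=FS, win_s=3.0, hop_s=1.0):
    """Average per-window bpm estimates over a sliding window."""
    win, hop = int(win_s * fs), int(hop_s * fs)
    ests = [window_bpm(pcg[i:i + win], fs)
            for i in range(0, len(pcg) - win + 1, hop)]
    return float(np.mean(ests))

# Synthetic PCG-like signal: a 25 Hz tone gated into one burst every 0.8 s,
# i.e. a true rate of 75 bpm.
t = np.arange(0.0, 10.0, 1.0 / FS)
pcg = np.sin(2 * np.pi * 25 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * t / 0.8)) ** 50
print(round(estimate_hr(pcg)))  # -> 75
```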
A Method for Detecting Murmurous Heart Sounds based on Self-similar Properties
Vimalajeewa, Dixon, Lee, Chihoon, Vidakovic, Brani
A heart murmur is an atypical sound produced by the flow of blood through the heart. It can be a sign of a serious heart condition, so detecting heart murmurs is critical for identifying and managing cardiovascular diseases. However, current methods for identifying murmurous heart sounds do not fully utilize the valuable insights that can be gained by exploring intrinsic properties of heart sound signals. To address this issue, this study proposes a new discriminatory set of multiscale features based on the self-similarity and complexity properties of heart sounds, as derived in the wavelet domain. Self-similarity is characterized by assessing fractal behaviors, while complexity is explored by calculating wavelet entropy. We evaluated the diagnostic performance of these proposed features for detecting murmurs using a set of standard classifiers. When applied to a publicly available heart sound dataset, our proposed wavelet-based multiscale features achieved comparable performance to existing methods with fewer features. This suggests that self-similarity and complexity properties in heart sounds could be potential biomarkers for improving the accuracy of murmur detection.
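The two multiscale descriptors named above can be sketched with a hand-rolled Haar transform (the Haar choice and the slope/entropy formulas here are illustrative assumptions; the paper's wavelet settings may differ): self-similarity is summarized by the slope of log2(detail energy) across scales, and complexity by the Shannon entropy of the normalized per-scale energies (wavelet entropy).

```python
import numpy as np

def haar_detail_energies(x, levels=6):
    """Mean squared Haar detail coefficients at each decomposition level."""
    x = np.asarray(x, dtype=float)
    energies = []
    for _ in range(levels):
        d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
        x = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation for next level
        energies.append(np.mean(d ** 2))
    return np.array(energies)

def self_similarity_slope(energies):
    """Slope of log2(energy) vs. scale, a fractal-behavior summary."""
    scales = np.arange(1, len(energies) + 1)
    return np.polyfit(scales, np.log2(energies), 1)[0]

def wavelet_entropy(energies):
    """Shannon entropy of the normalized per-scale energy distribution."""
    p = energies / energies.sum()
    return -np.sum(p * np.log2(p))

# White noise has no self-similar structure: the energy slope is near 0 and
# the wavelet entropy is near its maximum, log2(levels).
e = haar_detail_energies(np.random.default_rng(0).standard_normal(4096))
print(round(self_similarity_slope(e), 2), round(wavelet_entropy(e), 2))
```

A murmur redistributes energy across scales, so both descriptors shift relative to a normal recording, which is the discriminative signal the study exploits.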
Heart Murmur and Abnormal PCG Detection via Wavelet Scattering Transform & a 1D-CNN
Patwa, Ahmed, Rahman, Muhammad Mahboob Ur, Al-Naffouri, Tareq Y.
This work leverages deep learning (DL) techniques to perform automatic and accurate heart murmur detection from phonocardiogram (PCG) recordings. Two public PCG datasets (the CirCor DigiScope 2022 dataset and the PCG 2016 dataset) from the PhysioNet online database are utilized to train and test three custom neural networks (NN): a 1D convolutional neural network (CNN), a long short-term memory (LSTM) recurrent neural network (RNN), and a convolutional RNN (C-RNN). Under our proposed method, we first pre-process both datasets to prepare the data for the NNs. Key pre-processing steps include denoising, segmentation, re-labeling of noise-only segments, data normalization, and time-frequency analysis of the PCG segments using the wavelet scattering transform. To evaluate the performance of the three NNs we have implemented, we conduct four experiments, the first three using the PCG 2022 dataset and the fourth using the PCG 2016 dataset. Our custom 1D-CNN outperforms the other two NNs (LSTM-RNN and C-RNN) as well as the state of the art. Specifically, for experiment E1 (murmur detection using the original PCG 2022 dataset), our 1D-CNN model achieves an accuracy of 82.28%, a weighted accuracy of 83.81%, an F1-score of 65.79%, and an area under the receiver operating characteristic (AUROC) curve of 90.79%. For experiment E2 (murmur detection using the PCG 2022 dataset with the unknown class removed), our 1D-CNN model achieves an accuracy of 87.05%, an F1-score of 87.72%, and an AUROC of 94.4%. For experiment E3 (murmur detection using the PCG 2022 dataset with re-labeling of segments), our 1D-CNN model achieves an accuracy of 82.86%, a weighted accuracy of 86.30%, an F1-score of 81.87%, and an AUROC of 93.45%. For experiment E4 (abnormal PCG detection using the PCG 2016 dataset), our 1D-CNN model achieves an accuracy of 96.30%, an F1-score of 96.29%, and an AUROC of 98.17%.
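Two of the pre-processing steps listed above (segmentation into fixed-length windows and per-segment normalization) can be sketched as follows; the denoising, re-labeling, and wavelet-scattering stages are omitted, and the sampling rate and segment length are assumed values, not the paper's settings.

```python
import numpy as np

def segment(pcg, fs, seg_s=1.0):
    """Split a recording into non-overlapping seg_s-second segments."""
    n = int(seg_s * fs)
    k = len(pcg) // n          # drop the incomplete trailing remainder
    return pcg[:k * n].reshape(k, n)

def z_normalize(segments, eps=1e-8):
    """Zero-mean, unit-variance scaling per segment."""
    mu = segments.mean(axis=1, keepdims=True)
    sd = segments.std(axis=1, keepdims=True)
    return (segments - mu) / (sd + eps)

fs = 2000  # Hz (assumed)
pcg = np.random.default_rng(1).standard_normal(5 * fs) * 3.0 + 2.0
segs = z_normalize(segment(pcg, fs))
print(segs.shape)  # (5, 2000)
```

Per-segment normalization removes recording-level gain and offset differences so that the downstream network sees comparable inputs across devices and patients.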
Murmur Detection Using Parallel Recurrent & Convolutional Neural Networks
Alam, Shahnawaz, Banerjee, Rohan, Bandyopadhyay, Soma
In this article, we propose a novel technique for classification of murmurs in heart sounds. We introduce a novel deep neural network architecture using a parallel combination of a Recurrent Neural Network (RNN)-based Bidirectional Long Short-Term Memory (BiLSTM) and a Convolutional Neural Network (CNN) to learn the visual and time-dependent characteristics of murmurs in PCG waveforms. A set of acoustic features is presented to our proposed deep neural network to discriminate between the Normal and Murmur classes. The proposed method was evaluated on a large dataset using 5-fold cross-validation, resulting in a sensitivity of 96 ± 0.6%, a specificity of 100 ± 0%, and an F1 score of 98 ± 0.3%.
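The parallel-branch topology above can be sketched as a toy numpy forward pass: a CNN-like branch convolves the feature sequence and max-pools over time, an RNN-like branch accumulates a recurrent state, and the two pooled vectors are concatenated before a logistic output. All weights are random; this illustrates the topology only, not the paper's trained BiLSTM/CNN network.

```python
import numpy as np

rng = np.random.default_rng(0)
T, F, H = 50, 8, 4                       # time steps, features, hidden units

def cnn_branch(x, kernels):
    """kernels: (H, k, F). Valid 1-D convolution over time + global max-pool."""
    n_h, k, _ = kernels.shape
    out = np.empty(n_h)
    for h in range(n_h):
        resp = [np.sum(x[t:t + k] * kernels[h]) for t in range(len(x) - k + 1)]
        out[h] = np.max(resp)            # global max-pooling over time
    return out

def rnn_branch(x, W, U):
    """Simple tanh recurrence; returns the final hidden state."""
    h = np.zeros(H)
    for x_t in x:
        h = np.tanh(x_t @ W + h @ U)
    return h

x = rng.standard_normal((T, F))          # one recording's feature sequence
kern = rng.standard_normal((H, 5, F)) * 0.1
W, U = rng.standard_normal((F, H)) * 0.1, rng.standard_normal((H, H)) * 0.1
fused = np.concatenate([cnn_branch(x, kern), rnn_branch(x, W, U)])
w_out = rng.standard_normal(2 * H)
p_murmur = 1.0 / (1.0 + np.exp(-fused @ w_out))   # logistic Murmur score
print(fused.shape)  # (8,)
```

The concatenation is the key design choice: the classifier sees the convolutional branch's local spectral patterns and the recurrent branch's long-range temporal summary side by side.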