DENS-ECG: A Deep Learning Approach for ECG Signal Delineation

arXiv.org Machine Learning

Objectives: With the technological advancements in tele-health monitoring, it is now possible to gather huge amounts of electrophysiological signals such as the electrocardiogram (ECG). It is therefore necessary to develop models and algorithms capable of analysing these massive amounts of data in real time. This paper proposes a deep learning model for real-time segmentation of heartbeats. Methods: The proposed algorithm, named DENS-ECG, combines a convolutional neural network (CNN) with a long short-term memory (LSTM) network to detect the onset, peak, and offset of the different heartbeat waveforms: the P-wave, QRS complex, T-wave, and no wave (NW). Using raw ECG signals as input, the model learns to extract high-level features during training, which, unlike classical machine learning methods, eliminates the feature engineering step. Results: The proposed DENS-ECG model was trained and validated on a dataset of 105 ECG records, each 15 minutes long, and achieved an average sensitivity and precision of 97.95% and 95.68%, respectively, using 5-fold cross-validation. Additionally, the model was evaluated on an unseen dataset to examine its robustness in QRS detection, which resulted in a sensitivity of 99.61% and a precision of 99.52%. Conclusion: The empirical results show the flexibility and accuracy of the combined CNN-LSTM model for ECG signal delineation. Significance: This paper proposes an efficient and easy-to-use deep learning approach to heartbeat segmentation, which could potentially be used in real-time tele-health monitoring systems.
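As a rough illustration of the kind of CNN-LSTM architecture the abstract describes, here is a minimal sketch in PyTorch. This is not the authors' code: the layer count, channel widths, and kernel sizes are illustrative assumptions, chosen only to show how 1-D convolutions and a recurrent layer can be combined for per-sample labelling of an ECG into the four waveform classes.

# A minimal sketch (not the DENS-ECG implementation) of a CNN-LSTM delineator.
# Assumes single-lead ECG input and four output classes (P, QRS, T, no wave).
import torch
import torch.nn as nn

class CNNLSTMDelineator(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        # 1-D convolutions extract local morphological features from the raw ECG
        self.conv = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=9, padding=4),
            nn.ReLU(),
        )
        # A bidirectional LSTM models longer-range temporal context
        self.lstm = nn.LSTM(64, 64, batch_first=True, bidirectional=True)
        # Per-sample classification into the four waveform classes
        self.head = nn.Linear(2 * 64, n_classes)

    def forward(self, x):              # x: (batch, 1, n_samples)
        h = self.conv(x)               # (batch, 64, n_samples)
        h = h.transpose(1, 2)          # (batch, n_samples, 64)
        h, _ = self.lstm(h)            # (batch, n_samples, 128)
        return self.head(h)            # per-sample class logits

model = CNNLSTMDelineator()
logits = model(torch.randn(8, 1, 2500))   # e.g. 10 s of ECG sampled at 250 Hz
labels = logits.argmax(dim=-1)            # predicted waveform class per sample

Onsets, peaks, and offsets would then be read off from the boundaries of the predicted label runs; the sketch stops at the per-sample classification step.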


Deep Learning Examples: R2020a Edition

#artificialintelligence

With two releases every year, you may find it challenging to keep up with the latest features. In fact, some people who work here feel the same way! For this release, I asked the product managers which new deep learning features they think you should know about in R2020a. Here are their responses. Deep Learning: Starting with Deep Learning Toolbox, there are three new features to get excited about in 20a. Experiment Manager - a new app that keeps track of all the conditions when training neural networks.
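To make the idea of tracking training conditions concrete, here is a conceptual plain-Python sketch, not the Experiment Manager app itself. The hyperparameter names and the training routine are hypothetical placeholders; the point is only that every configuration is run and logged together with its result.

# Conceptual sketch of experiment tracking: sweep a hyperparameter grid,
# train once per configuration, and record each condition with its outcome.
import csv
import itertools

def train_model(learning_rate, batch_size):
    """Placeholder for a real training routine; returns a validation score."""
    return 1.0 / (1.0 + learning_rate * batch_size)  # dummy metric

grid = {"learning_rate": [1e-3, 1e-2], "batch_size": [32, 64]}

with open("experiments.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["learning_rate", "batch_size", "val_score"])
    for lr, bs in itertools.product(grid["learning_rate"], grid["batch_size"]):
        score = train_model(lr, bs)
        writer.writerow([lr, bs, score])  # every condition and outcome is logged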


Open AI Caribbean Data Science Challenge

#artificialintelligence

The following post is from Neha Goel, who champions student competitions and online data science competitions. She's here to promote a new deep learning challenge that is open to everyone. If you win, you get money, plus a bonus if you use MATLAB. We at MathWorks, in collaboration with DrivenData, are excited to bring you this challenge. In this challenge you'll be working with a real-world dataset of drone aerial imagery (large images) for classification.


Deep Learning for Computer Vision with MATLAB

#artificialintelligence

Computer vision engineers have used machine learning techniques for decades to detect objects of interest in images and to classify or identify categories of objects. They extract features representing points, regions, or objects of interest and then use those features to train a model to classify or learn patterns in the image data. In traditional machine learning, feature selection is a time-consuming manual process. Feature extraction usually involves processing each image with one or more image processing operations, such as calculating gradients, to extract the discriminative information from each image. Deep learning algorithms, by contrast, learn features, representations, and tasks directly from images, text, and sound, eliminating the need for manual feature selection.
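As an illustration of the traditional pipeline described above, here is a minimal Python sketch, assuming scikit-image and scikit-learn are available. It uses a gradient-based HOG descriptor as the hand-crafted feature step and a linear classifier on top; the dataset and parameter choices are purely illustrative, not tied to the article.

# Traditional pipeline: hand-crafted gradient features, then a conventional classifier.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from skimage.feature import hog

digits = load_digits()  # small 8x8 grayscale digit images, used here as a stand-in dataset

# Feature extraction: one gradient-orientation (HOG) descriptor per image
features = np.array([
    hog(img, orientations=9, pixels_per_cell=(4, 4), cells_per_block=(1, 1))
    for img in digits.images
])

# Train a classical model on the extracted features and evaluate it
X_train, X_test, y_train, y_test = train_test_split(
    features, digits.target, test_size=0.2, random_state=0
)
clf = LinearSVC().fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))

A deep learning approach would instead feed the raw images to a convolutional network, which learns the equivalent of the feature-extraction step directly from the data.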