Learning from Higher-Layer Feature Visualizations (Machine Learning)

Driven by the goal of enabling sleep apnea monitoring and machine learning-based detection at home with small mobile devices, we investigate whether interpretation-based indirect knowledge transfer can be used to create classifiers with acceptable performance. Interpretation-based indirect knowledge transfer means that a classifier (student) learns from a synthetic dataset based on the knowledge representation of an already trained Deep Network (teacher). We use activation maximization to generate visualizations and create a synthetic dataset to train the student classifier. This approach has the advantage that student classifiers can be trained without access to the original training data. In experiments, we investigate the feasibility of interpretation-based indirect knowledge transfer and its limitations. The student achieves an accuracy of 97.8% on MNIST (teacher accuracy: 99.3%) with a smaller architecture similar to the teacher's. The student classifier achieves an accuracy of 86.1% and 89.5% for a subset of the Apnea-ECG dataset (teacher: 89.5% and 91.1%, respectively).
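The core mechanism here, activation maximization, can be sketched in a few lines: gradient-ascend a synthetic input so that the teacher's output for a chosen class grows, then label the result with that class. The sketch below uses a toy linear teacher and hypothetical sizes, since the abstract does not specify the networks; a real pipeline would backpropagate through the trained deep network instead.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy "teacher": a fixed linear layer standing in for a trained
# deep network (logits = W @ x). Shapes are illustrative only.
n_features, n_classes = 16, 3
W = rng.normal(size=(n_classes, n_features))

def activation_maximization(target_class, steps=200, lr=0.1, l2=0.01):
    """Gradient-ascend an input so the teacher's logit for target_class grows.

    The L2 penalty keeps the synthetic input bounded, a common regularizer
    in feature-visualization work.
    """
    x = rng.normal(scale=0.01, size=n_features)
    for _ in range(steps):
        # d(logit_c)/dx = W[c];  d(-l2 * ||x||^2)/dx = -2 * l2 * x
        grad = W[target_class] - 2 * l2 * x
        x += lr * grad
    return x

# Build a synthetic labelled dataset: one visualization per class.
synthetic_X = np.stack([activation_maximization(c) for c in range(n_classes)])
synthetic_y = np.arange(n_classes)

# After ascent, the teacher's logit for each target class is well above zero.
target_logits = np.einsum('cf,cf->c', W, synthetic_X)
```

A student classifier would then be fit on (synthetic_X, synthetic_y), never touching the teacher's original training data.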

Detection of Obstructive Sleep Apnoea Using Features Extracted from Segmented Time-Series ECG Signals Using a One Dimensional Convolutional Neural Network (Machine Learning)

Steven Thompson (S.R.Thompson@LJMU.AC.UK), Denis Reilly (D.Reilly@LJMU.AC.UK), Paul Fergus (P.Fergus@LJMU.AC.UK), Carl Chalmers (C.Chalmers@LJMU.AC.UK) -- Computer Science, Liverpool John Moores University, Liverpool, Merseyside. Abstract -- The study in this paper presents a one-dimensional convolutional neural network (1DCNN) model designed for the automated detection of Obstructive Sleep Apnoea (OSA) from single-channel electrocardiogram (ECG) signals. The system provides mechanisms for clinical practice that help diagnose patients suffering from OSA. Using the state of the art in 1DCNNs, a model is constructed from convolutional and max pooling layers together with a fully connected Multilayer Perceptron (MLP) consisting of a hidden layer and a SoftMax output for classification. The 1DCNN extracts prominent features, which are used to train the MLP. The model is trained on segmented ECG signals grouped into 5 unique datasets of set window sizes. A total of 6514 minutes of Apnoea was recorded. The results demonstrate that the model can identify the presence of Apnoea with a high degree of accuracy. Obstructive Sleep Apnoea (OSA) is a sleep disorder that interrupts the natural rhythm of a person's breathing while they are sleeping.
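The forward pass of the architecture described (convolution, max pooling, then an MLP with one hidden layer and a SoftMax output) can be sketched with numpy. The abstract gives no layer sizes or filter widths, so every shape below is an illustrative assumption, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv1d(x, kernels, bias):
    """Valid 1D convolution with ReLU: (length,) signal -> (n_kernels, out_len)."""
    k = kernels.shape[1]
    out_len = x.size - k + 1
    # Stack the sliding windows, then apply all filters with one matmul.
    windows = np.stack([x[i:i + k] for i in range(out_len)])   # (out_len, k)
    return np.maximum(kernels @ windows.T + bias[:, None], 0)

def max_pool1d(fmap, pool=2):
    """Non-overlapping max pooling along the time axis."""
    n, t = fmap.shape
    t = t - t % pool
    return fmap[:, :t].reshape(n, -1, pool).max(axis=2)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical tiny model on one 60-sample ECG segment, with 4 filters
# of width 5, an 8-unit hidden layer, and a 2-way (apnoea/normal) output.
segment = rng.normal(size=60)
kernels = rng.normal(size=(4, 5)) * 0.1
bias = np.zeros(4)

features = max_pool1d(conv1d(segment, kernels, bias))          # (4, 28)
hidden = np.maximum(rng.normal(size=(8, features.size)) @ features.ravel(), 0)
probs = softmax(rng.normal(size=(2, 8)) @ hidden)              # class probabilities
```

In the paper's setup, the convolutional features would be learned end to end and the MLP trained on them; here the random weights only demonstrate the data flow and shapes.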

Forecasting Sleep Apnea with Dynamic Network Models (Artificial Intelligence)

Dynamic network models (DNMs) are belief networks for temporal reasoning. The DNM methodology combines techniques from time series analysis and probabilistic reasoning to provide (1) a knowledge representation that integrates noncontemporaneous and contemporaneous dependencies and (2) methods for iteratively refining these dependencies in response to the effects of exogenous influences. We use belief-network inference algorithms to perform forecasting, control, and discrete event simulation on DNMs. The belief network formulation allows us to move beyond the traditional assumptions of linearity in the relationships among time-dependent variables and of normality in their probability distributions. We demonstrate the DNM methodology on an important forecasting problem in medicine. We conclude with a discussion of how the methodology addresses several limitations found in traditional time series analyses.
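To make the forecasting-by-inference idea concrete, here is a minimal sketch of forward inference in a two-slice temporal belief network, the simplest special case of a DNM: belief is propagated through a transition model, conditioned on each observation, and forecasting is the same propagation with no new evidence. The two-state model and all probabilities are illustrative, not taken from the paper.

```python
import numpy as np

# Illustrative two-slice temporal model: a binary hidden state with a
# transition model and a noisy observation model.
transition = np.array([[0.9, 0.1],    # P(state_t | state_{t-1}), rows = previous state
                       [0.3, 0.7]])
emission = np.array([[0.8, 0.2],      # P(obs_t | state_t), rows = current state
                     [0.25, 0.75]])

def filter_step(belief, obs):
    """Propagate belief one slice forward, then condition on the observation."""
    predicted = transition.T @ belief          # prediction through the slice
    updated = predicted * emission[:, obs]     # weight by evidence likelihood
    return updated / updated.sum()

def forecast(belief, horizon):
    """Forecasting is the same propagation, just with no new evidence."""
    for _ in range(horizon):
        belief = transition.T @ belief
    return belief

belief = np.array([0.5, 0.5])
for obs in [0, 0, 1]:                          # a short observed sequence
    belief = filter_step(belief, obs)
future = forecast(belief, horizon=3)           # belief three slices ahead
```

The DNM methodology generalizes this beyond linear-Gaussian assumptions by letting the belief network encode arbitrary contemporaneous and noncontemporaneous dependencies; the sketch shows only the inference loop.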

Automated Polysomnography Analysis for Detection of Non-Apneic and Non-Hypopneic Arousals using Feature Engineering and a Bidirectional LSTM Network (Machine Learning)

Objective: The aim of this study is to develop an automated classification algorithm for polysomnography (PSG) recordings to detect non-apneic and non-hypopneic arousals. Our particular focus is on detecting the respiratory effort-related arousals (RERAs) which are very subtle respiratory events that do not meet the criteria for apnea or hypopnea, and are more challenging to detect. Methods: The proposed algorithm is based on a bidirectional long short-term memory (BiLSTM) classifier and 465 multi-domain features, extracted from multimodal clinical time series. The features consist of a set of physiology-inspired features (n = 75), obtained by multiple steps of feature selection and expert analysis, and a set of physiology-agnostic features (n = 390), derived from scattering transform. Results: The proposed algorithm is validated on the 2018 PhysioNet challenge dataset. The overall performance in terms of the area under the precision-recall curve (AUPRC) is 0.50 on the hidden test dataset. This result is tied for the second-best score during the follow-up and official phases of the 2018 PhysioNet challenge. Conclusions: The results demonstrate that it is possible to automatically detect subtle non-apneic/non-hypopneic arousal events from PSG recordings. Significance: Automatic detection of subtle respiratory events such as RERAs together with other non-apneic/non-hypopneic arousals will allow detailed annotations of large PSG databases. This contributes to a better retrospective analysis of sleep data, which may also improve the quality of treatment.
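The headline metric, area under the precision-recall curve, is commonly estimated as average precision: the mean of the precision values at each true positive when detections are ranked by score. A minimal sketch (not the challenge's official scorer, whose exact interpolation may differ):

```python
import numpy as np

def average_precision(labels, scores):
    """AUPRC estimated as average precision over a ranked detection list."""
    order = np.argsort(-np.asarray(scores))    # sort by score, descending
    labels = np.asarray(labels)[order]
    tp = np.cumsum(labels)                     # true positives at each cutoff
    k = np.arange(1, labels.size + 1)
    precision = tp / k
    # Average the precision at the rank of each true positive.
    return precision[labels == 1].sum() / labels.sum()

ap_perfect = average_precision([1, 1, 0], [0.9, 0.8, 0.1])   # ideal ranking -> 1.0
ap_mixed = average_precision([1, 0, 1, 0], [0.9, 0.8, 0.7, 0.1])
```

For the mixed example, the positives sit at ranks 1 and 3 with precisions 1 and 2/3, so the score is 5/6; a 0.50 AUPRC on the challenge's heavily imbalanced arousal labels is far above the chance level, which equals the positive-class prevalence.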

Teacher-Student Domain Adaptation for Biosensor Models (Machine Learning)

We present an approach to domain adaptation, addressing the case where data from the source domain is abundant, labelled data from the target domain is limited or non-existent, and a small amount of paired source-target data is available. The method is designed for developing deep learning models that detect the presence of medical conditions based on data from consumer-grade portable biosensors. It addresses some of the key problems in this area, namely, the difficulty of acquiring large quantities of clinically labelled data from the biosensor, and the noise and ambiguity that can affect the clinical labels. The idea is to pre-train an expressive model on a large dataset of labelled recordings from a sensor modality for which data is abundant, and then to adapt the model's lower layers so that its predictions on the target modality are similar to the original model's on paired examples from the source modality. We show that the pre-trained model's predictions provide a substantially better learning signal than the clinician-provided labels, and that this teacher-student technique significantly outperforms both a naive application of supervised deep learning and a label-supervised version of domain adaptation on a synthetic dataset and in a real-world case study on sleep apnea. By reducing the volume of data required and obviating the need for labels, our approach should reduce the cost associated with developing high-performance deep learning models for biosensors.
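The training signal described, matching the student's target-modality predictions to the teacher's source-modality predictions on paired examples, can be sketched with two toy linear models. Everything here (linear scorers, a linear relation between modalities, the learning rate) is an illustrative assumption standing in for the deep models and biosensor data of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: a pre-trained teacher scores the source modality;
# the student sees a different but related target modality.
d_src, d_tgt, n_pairs = 8, 6, 200
w_teacher = rng.normal(size=d_src)

X_src = rng.normal(size=(n_pairs, d_src))                       # source recordings
A = rng.normal(size=(d_src, d_tgt)) * 0.5
X_tgt = X_src @ A + rng.normal(scale=0.05, size=(n_pairs, d_tgt))  # paired target recordings

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# The teacher's soft predictions replace clinical labels entirely.
teacher_probs = sigmoid(X_src @ w_teacher)

# Student: plain gradient descent on the MSE between its predictions on
# the target modality and the teacher's predictions on the paired source.
w_student = np.zeros(d_tgt)
lr = 0.05
for _ in range(500):
    p = sigmoid(X_tgt @ w_student)
    err = p - teacher_probs
    w_student -= lr * X_tgt.T @ (err * p * (1 - p)) / n_pairs

mse = np.mean((sigmoid(X_tgt @ w_student) - teacher_probs) ** 2)
```

In the paper's setting, only the lower layers of a pre-trained deep model would be adapted this way; the sketch compresses that to a single linear map to show the paired-distillation loss itself.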