For the first time, researchers from the London School of Hygiene & Tropical Medicine, the Mercator Research Institute on Global Commons and Climate Change, and the University of Leeds have deployed machine learning algorithms to scan the worldwide evidence on climate change and health. Funded by the Foreign, Commonwealth and Development Office, they used machine learning to map the global published evidence on climate change, weather and health from 2013 to 2020 and to produce an online interactive results platform. The approach identified the health effects of air quality and heat as the most frequently studied topics in an evidence base dominated by studies from high-income countries and China. There is currently very limited evidence from the low- and middle-income countries that suffer most from the health consequences of climate change, and evidence on the impact of climate change on mental health and on maternal and child health is extremely limited.
A deep learning classifier for detecting seizures in neonates is proposed. The architecture is designed to detect seizure events from raw electroencephalogram (EEG) signals, as opposed to the hand-engineered, feature-based representations employed in traditional machine-learning solutions. The seizure detection system uses only convolutional layers to process the multichannel time-domain signal and is designed to exploit the large amount of weakly labelled data available at the training stage. System performance is assessed on a large database of continuous EEG recordings, 834 h in duration; this is further validated on a held-out publicly available dataset and compared with two SVM-based baseline systems. The developed system achieves a 56% relative improvement over a feature-based state-of-the-art baseline, reaching an AUC of 98.5%, and compares favourably with the baselines in both performance and run-time. The effect of varying architectural parameters is thoroughly studied. The performance improvement is achieved through a novel architecture design that allows more efficient use of the available training data and end-to-end optimisation from front-end feature extraction to back-end classification. The proposed architecture opens new avenues for the application of deep learning to neonatal EEG, where performance becomes a function of the amount of training data, with less dependency on the availability of precise clinical labels.
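The convolution-only pipeline described above — raw multichannel EEG in, seizure probability out — can be illustrated with a minimal NumPy sketch. The layer sizes, weights, and the trivial linear head below are hypothetical stand-ins; the paper's actual architecture is not reproduced here.

```python
import numpy as np

def conv1d(x, w, b):
    """Valid 1D convolution: x is (channels, time), w is (filters, channels, kernel)."""
    f, c, k = w.shape
    t = x.shape[1] - k + 1
    out = np.zeros((f, t))
    for i in range(t):
        # Contract each filter against the current (channels, kernel) window.
        out[:, i] = np.tensordot(w, x[:, i:i + k], axes=([1, 2], [0, 1])) + b
    return out

def seizure_probability(eeg, w, b):
    """Conv -> ReLU -> global average pooling -> sigmoid score (illustrative only)."""
    h = np.maximum(conv1d(eeg, w, b), 0.0)  # ReLU
    pooled = h.mean(axis=1)                 # global average pooling over time
    logit = pooled.sum()                    # trivial linear head with unit weights
    return 1.0 / (1.0 + np.exp(-logit))

rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 256))        # 8 channels, 256 time samples (synthetic)
w = rng.standard_normal((4, 8, 16)) * 0.1  # 4 filters, kernel length 16
b = np.zeros(4)
p = seizure_probability(eeg, w, b)
print(round(p, 3))  # a probability strictly in (0, 1)
```

Because every stage is differentiable, a real system of this shape can be optimised end to end from the raw signal to the decision, which is the property the abstract highlights.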
EEG is the gold standard for seizure detection in the newborn infant, but EEG interpretation in the preterm group is particularly challenging; trained experts are scarce and the task of interpreting EEG in real time is arduous. Preterm infants are reported to have a higher incidence of seizures than term infants. Preterm EEG morphology differs from that of term infants, which implies that seizure detection algorithms trained on term EEG may not be appropriate. The task of developing preterm-specific algorithms is especially challenging given the limited amount of annotated preterm EEG data available. This paper explores novel deep learning (DL) architectures for the task of neonatal seizure detection in preterm infants. The study tests and compares several approaches to the problem: training on data from full-term infants; training on data from preterm infants; training on age-specific preterm data; and transfer learning. System performance is assessed on a large database of continuous EEG recordings, 575 h in duration. It is shown that the accuracy of a validated term-trained EEG seizure detection algorithm, based on a support vector machine classifier, falls well short of its full-term performance when tested on preterm infants: an AUC of 88.3% was obtained on preterm EEG, compared with 96.6% on term EEG. When retrained on preterm EEG, performance increases only marginally, to 89.7%. An alternative DL approach shows a more stable trend on the preterm cohort, starting from an AUC of 93.3% for the term-trained algorithm and reaching 95.0% through transfer learning from the term model using the available preterm data.
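The transfer-learning step described above — pretraining on a plentiful term cohort, then fine-tuning on the scarce preterm data — can be sketched with a simple logistic-regression stand-in for the DL model. The data, decision boundaries, and learning-rate settings below are synthetic assumptions for illustration only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_loss(w, X, y):
    p = sigmoid(X @ w)
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

def train(w, X, y, lr=0.5, steps=200):
    """Plain gradient descent on the logistic loss."""
    for _ in range(steps):
        p = sigmoid(X @ w)
        w = w - lr * X.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(1)
# "Term" cohort: plentiful data, one decision boundary (synthetic).
X_term = rng.standard_normal((500, 5))
y_term = (X_term @ np.array([1.0, -1.0, 0.5, 0.0, 0.0]) > 0).astype(float)
# "Preterm" cohort: scarce data, a shifted boundary (synthetic).
X_pre = rng.standard_normal((60, 5))
y_pre = (X_pre @ np.array([1.0, -0.5, 0.5, 0.5, 0.0]) > 0).astype(float)

w_term = train(np.zeros(5), X_term, y_term)      # pretrain on term data
loss_before = log_loss(w_term, X_pre, y_pre)     # term model applied to preterm
w_ft = train(w_term, X_pre, y_pre, steps=100)    # fine-tune on preterm data
loss_after = log_loss(w_ft, X_pre, y_pre)
print(loss_after < loss_before)  # fine-tuning improves the preterm fit: True
```

The point of starting from `w_term` rather than zeros mirrors the abstract's finding: the term model is a useful initialisation, and a modest amount of preterm data closes much of the remaining gap.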
OBJECTIVES: Misdiagnosis of acute and chronic otitis media in children can result in significant consequences from either undertreatment or overtreatment. Our objective was to develop and train an artificial intelligence algorithm to accurately predict the presence of middle ear effusion in pediatric patients presenting to the operating room for myringotomy and tube placement. METHODS: We trained a neural network to classify images as "normal" (no effusion) or "abnormal" (effusion present) using tympanic membrane images from children taken to the operating room with the intent of performing myringotomy and possible tube placement for recurrent acute otitis media or otitis media with effusion. Model performance was tested on held-out cases and with fivefold cross-validation. RESULTS: The mean training time for the neural network model was 76.0 (SD ± 0.01) seconds. Our model achieved a mean image classification accuracy of 83.8% (95% confidence interval [CI]: 82.7–84.8). In support of this classification accuracy, the model produced an area under the receiver operating characteristic curve of 0.93 (95% CI: 0.91–0.94) and an F1-score of 0.80 (95% CI: 0.77–0.82). CONCLUSIONS: Artificial intelligence–assisted diagnosis of acute or chronic otitis media in children may generate value for patients, families, and the health care system by improving point-of-care diagnostic accuracy. With a small training data set composed of intraoperative images obtained at the time of tympanostomy tube insertion, our neural network was accurate in predicting the presence of a middle ear effusion in pediatric ear cases. This diagnostic accuracy is considerably higher than the human-expert, otoscopy-based diagnostic performance reported in previous studies.
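The fivefold cross-validation protocol mentioned in the methods amounts to shuffling the images and partitioning them into five disjoint folds, each held out exactly once. A minimal sketch of that fold construction (the dataset size and seed below are arbitrary assumptions):

```python
import numpy as np

def five_fold_indices(n, seed=0):
    """Shuffle sample indices and split them into 5 disjoint folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    return np.array_split(idx, 5)

folds = five_fold_indices(100)  # e.g. 100 tympanic membrane images
held_out = np.concatenate(folds)
# Every image appears in exactly one held-out fold across the 5 splits.
print(len(folds), set(held_out.tolist()) == set(range(100)))  # 5 True
```

Reporting the mean accuracy over the five held-out folds, as the abstract does, reduces the variance that a single train/test split would introduce on a small dataset.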
Sleep apnea is a disorder that has serious consequences for the pediatric population. There has been recent concern that traditional diagnosis of the disorder using the apnea-hypopnea index may be ineffective in capturing its multi-faceted outcomes. In this work, we take a first step toward addressing this issue by phenotyping patients using a clustering analysis of airflow time series. This is approached in three ways: feature-based fuzzy clustering in the time domain, feature-based fuzzy clustering in the frequency domain, and persistent homology, which studies the signal from a topological perspective. The fuzzy clusters are analyzed in a novel manner using a Dirichlet regression analysis, while the topological approach leverages Takens' embedding theorem to study the periodicity properties of the signals.
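Takens' embedding theorem, which the topological approach relies on, reconstructs the state space of a signal from delayed copies of itself: a periodic signal embeds as a closed loop, and disruptions to periodicity (such as apneic events) deform that loop. A minimal delay-embedding sketch on a synthetic sinusoid (the dimension and delay below are illustrative choices, not the paper's):

```python
import numpy as np

def takens_embedding(x, dim, delay):
    """Map a 1D signal to points (x[t], x[t+delay], ..., x[t+(dim-1)*delay])."""
    n = len(x) - (dim - 1) * delay
    return np.column_stack([x[i * delay : i * delay + n] for i in range(dim)])

# A periodic airflow-like signal embeds to a closed loop in 2D
# when the delay is roughly a quarter of the period.
t = np.linspace(0, 4 * np.pi, 200)
emb = takens_embedding(np.sin(t), dim=2, delay=25)
print(emb.shape)  # (175, 2)
```

Persistent homology applied to such an embedded point cloud would then detect the loop as a long-lived 1-dimensional homology class, which is how periodicity becomes measurable topologically.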
Estimating dynamic treatment regimes (DTRs) from retrospective observational data is challenging as some degree of unmeasured confounding is often expected. In this work, we develop a framework of estimating properly defined "optimal" DTRs with a time-varying instrumental variable (IV) when unmeasured covariates confound the treatment and outcome, rendering the potential outcome distributions only partially identified. We derive a novel Bellman equation under partial identification, use it to define a generic class of estimands (termed IV-optimal DTRs), and study the associated estimation problem. We then extend the IV-optimality framework to tackle the policy improvement problem, delivering IV-improved DTRs that are guaranteed to perform no worse and potentially better than a pre-specified baseline DTR. Importantly, our IV-improvement framework opens up the possibility of strictly improving upon DTRs that are optimal under the no unmeasured confounding assumption (NUCA). We demonstrate via extensive simulations the superior performance of IV-optimal and IV-improved DTRs over the DTRs that are optimal only under the NUCA. In a real data example, we embed retrospective observational registry data into a natural, two-stage experiment with noncompliance using a time-varying IV and estimate useful IV-optimal DTRs that assign mothers to high-level or low-level neonatal intensive care units based on their prognostic variables.
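For background, the classical two-stage dynamic-programming (Q-learning) recursion for DTRs under the no-unmeasured-confounding assumption can be written as follows; the paper's contribution is a novel version of this recursion under partial identification with a time-varying IV, which is not reproduced here. With history $H_t$, treatment $A_t$, and outcome $Y$:

```latex
% Stage-2 value function and optimal rule:
Q_2(h_2, a_2) = \mathbb{E}\left[\, Y \mid H_2 = h_2,\ A_2 = a_2 \,\right], \qquad
d_2^{\mathrm{opt}}(h_2) = \arg\max_{a_2} Q_2(h_2, a_2)

% Stage-1 value, propagating the optimal stage-2 choice backward (Bellman step):
Q_1(h_1, a_1) = \mathbb{E}\left[\, \max_{a_2} Q_2(H_2, a_2) \mid H_1 = h_1,\ A_1 = a_1 \,\right], \qquad
d_1^{\mathrm{opt}}(h_1) = \arg\max_{a_1} Q_1(h_1, a_1)
```

Under unmeasured confounding these conditional expectations no longer identify the potential-outcome means, only bounds on them; the IV-optimal framework replaces the point-identified Q-functions with criteria defined over the partially identified set.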
Charlottesville, VA, USA, March 31, 2021--Unbound Medicine, a leader in knowledge management solutions for healthcare, today announced a major upgrade to its end-to-end digital publishing platform. To enhance clinical decision support capabilities for professional societies and healthcare institutions, Unbound developed Unbound Intelligence (UBI), exclusive artificial intelligence and machine learning tools to help clinicians keep up to date with current research, as well as discover and fill knowledge gaps. Unbound Intelligence quickly analyzes large volumes of data and recommends options for next steps in patient management. While clinicians answer questions or research areas of interest on the Unbound Platform, UBI instantly filters through available resources, including the most up-to-date primary literature, to suggest closely related topics and relevant, recently published journal articles. This allows clinicians to quickly expand their reach and discover evidence-based guidance that may otherwise have gone unnoticed.
This study aimed to automate skeletal muscle segmentation in a pediatric population using convolutional neural networks that identify and segment the L3 level at CT. In this retrospective study, two sets of U-Net–based models were developed to identify the L3 level in the sagittal plane and segment the skeletal muscle from the corresponding axial image. For model development, 370 patients (sampled uniformly across age groups from 0 to 18 years and including both sexes) were selected between January 2009 and January 2019, and ground truth L3 location and skeletal muscle segmentation were manually defined. Twenty percent (74 of 370) of the examinations were reserved for testing the L3 locator and muscle segmentation, while the remainder were used for training. For the L3 locator models, maximum intensity projections (MIPs) from a fixed number of central sections of sagittal reformats (either 12 or 18 sections) were used as input, with or without transfer learning using an L3 localizer trained on an external dataset (four models total).
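The MIP input described above collapses a small central slab of sagittal sections into a single 2D image by taking the per-pixel maximum. A minimal sketch of that preprocessing step (the volume shape, axis layout, and intensity range below are hypothetical; actual CT volumes would come from DICOM data):

```python
import numpy as np

def central_sagittal_mip(volume, n_sections=12):
    """Maximum intensity projection over the central sagittal sections.

    volume: CT array assumed to be shaped (sagittal, height, width).
    """
    c = volume.shape[0] // 2
    half = n_sections // 2
    central = volume[c - half : c + half]  # e.g. the 12 central sections
    return central.max(axis=0)             # per-pixel max collapses the slab

# Synthetic stand-in for a CT volume (Hounsfield-unit-like range).
vol = np.random.default_rng(2).integers(-1000, 1000, size=(40, 64, 64))
mip = central_sagittal_mip(vol, n_sections=12)
print(mip.shape)  # (64, 64)
```

Using a thin central slab rather than the whole volume keeps the spine (a midline structure) dominant in the projection, which is presumably why a fixed number of central sections suffices for locating L3.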
The current practice for assessing neonatal postoperative pain relies on bedside caregivers. This practice is subjective, inconsistent, slow, and discontinuous. To develop a reliable medical interpretation, several automated approaches have been proposed to enhance the current practice. These approaches are unimodal and focus mainly on assessing neonatal procedural (acute) pain. As pain is often expressed through multiple modalities, multimodal assessment of pain is necessary, especially in the case of postoperative (acute prolonged) pain. Additionally, spatio-temporal analysis is more stable over time and has been proven highly effective at minimizing misclassification errors. In this paper, we present a novel multimodal spatio-temporal approach that integrates visual and vocal signals and uses them to assess neonatal postoperative pain. We conduct comprehensive experiments to investigate the effectiveness of the proposed approach. We compare the performance of multimodal and unimodal postoperative pain assessment, and measure the impact of temporal information integration. The experimental results, on a real-world dataset, show that the proposed multimodal spatio-temporal approach achieves the highest AUC (0.87) and accuracy (79%), which are on average 6.67% and 6.33% higher than the unimodal approaches. The results also show that integrating temporal information markedly improves performance compared with the non-temporal approach, as it captures changes in the pain dynamics. These results demonstrate that the proposed approach can serve as a viable alternative to manual assessment, paving the way toward fully automated pain monitoring in clinical settings, point-of-care testing, and homes.
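One common way to integrate visual and vocal streams is decision-level (late) fusion, in which each unimodal model produces a pain score and the scores are combined. The abstract does not specify the paper's fusion scheme, so the weighted-average rule, scores, and weights below are purely illustrative assumptions:

```python
import numpy as np

def late_fusion(p_visual, p_vocal, w_visual=0.5):
    """Decision-level fusion: weighted average of per-modality pain scores."""
    return w_visual * p_visual + (1 - w_visual) * p_vocal

# Hypothetical per-segment pain probabilities from each unimodal model.
p_face = np.array([0.9, 0.2, 0.6])  # visual (facial expression) model
p_cry  = np.array([0.7, 0.4, 0.8])  # vocal (crying sound) model
fused = late_fusion(p_face, p_cry)
print(np.round(fused, 2).tolist())  # [0.8, 0.3, 0.7]
```

The appeal of combining modalities this way is robustness: when one signal is unavailable or ambiguous (a covered face, a quiet infant), the other can still carry the decision, which is the motivation the abstract gives for multimodal assessment.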