Sleep
Major breakthrough reveals new state of consciousness that could unlock more of your brain
Researchers have discovered that lucid dreaming is more than just a vivid sleep state; it is a distinct state of consciousness in its own right. Lucid dreaming occurs when a person becomes aware they are dreaming, often gaining the ability to control the dream's events. For instance, they might fly, walk through walls, or confront fears, taking advantage of the limitless possibilities. Previously, scientists believed lucid dreams were simply more vivid or intense versions of the typical dreams that occur during REM (rapid eye movement) sleep, a normal phase of the sleep cycle characterized by increased brain activity. But the new study shows that brain activity patterns during a lucid dream differ from those seen during both regular dreams and wakefulness.
Your Galaxy Watch could get a major sleep apnea upgrade, thanks to AI and Stanford
Your next Galaxy Watch could do more than simply diagnose sleep apnea, thanks to a recent partnership with Stanford University. Samsung announced on Tuesday that it is teaming up with Stanford Medicine to enhance the obstructive sleep apnea feature on its smartwatch. The partnership's goal is to develop features that not only recognize sleep apnea in a Galaxy Watch wearer but also provide meaningful insights for managing the condition, and Samsung plans to use AI to get there. The company's obstructive sleep apnea feature has received de novo classification from the US Food and Drug Administration, a regulatory pathway that authorizes novel health devices that are not based on a "predicate device."
Robust Sleep Staging over Incomplete Multimodal Physiological Signals via Contrastive Imagination
Multimodal physiological signals (PSs), such as EEG, EOG, and EMG, provide rich and reliable physiological information for automated sleep staging (ASS). However, in the real world, the completeness of all modalities is difficult to guarantee, which seriously degrades the performance of ASS methods based on multimodal learning. Exploiting the temporal context information within PSs is a further challenge. To this end, we propose a robust multimodal sleep staging framework named the contrastive imagination modality sleep network (CIMSleepNet). Specifically, CIMSleepNet handles arbitrarily missing modalities through the combination of a modal awareness imagination module (MAIM) and semantic & modal calibration contrastive learning (SMCCL).
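The abstract does not detail how the imagination module recovers a missing modality, so below is a minimal, hypothetical sketch of the general idea: embeddings of the observed modalities are pooled, and a small per-modality generator "imagines" the embedding of whatever is absent. The class name `ModalityImaginer`, the shapes, and the architecture are illustrative assumptions, not the CIMSleepNet implementation.

```python
# Sketch: impute a missing modality embedding from the available ones.
# All names, shapes, and layers are illustrative assumptions.
import torch
import torch.nn as nn

class ModalityImaginer(nn.Module):
    def __init__(self, n_modalities=3, dim=64):
        super().__init__()
        # One small generator per modality: reconstructs that modality's
        # embedding from the mean of the embeddings that are present.
        self.generators = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
            for _ in range(n_modalities)
        ])

    def forward(self, embeddings, present):
        # embeddings: (batch, n_modalities, dim); present: (batch, n_modalities) bool
        mask = present.unsqueeze(-1).float()
        # Mean over the modalities that are actually observed.
        context = (embeddings * mask).sum(1) / mask.sum(1).clamp(min=1.0)
        imagined = torch.stack([g(context) for g in self.generators], dim=1)
        # Keep real embeddings where present; fall back to imagined ones.
        return torch.where(present.unsqueeze(-1), embeddings, imagined)

# Toy usage: EEG/EOG/EMG epoch embeddings with EMG missing for sample 0.
emb = torch.randn(2, 3, 64)
present = torch.tensor([[True, True, False], [True, True, True]])
completed = ModalityImaginer()(emb, present)
print(completed.shape)  # torch.Size([2, 3, 64])
```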
DreamCatcher: A Wearer-aware Sleep Event Dataset Based on Earables in Non-restrictive Environments
Zeyu Wang
Poor-quality sleep can be characterized by the occurrence of events ranging from body movement to breathing impairment. Widely available earbuds equipped with sensors (also known as earables) can be combined with a sleep event detection algorithm to offer a convenient alternative to laborious clinical tests for individuals suffering from sleep disorders. Although various solutions utilizing such devices have been proposed to detect sleep events, they ignore the fact that individuals often share sleeping spaces with roommates or partners. To address this issue, we introduce DreamCatcher, the first publicly available dataset for wearer-aware sleep event algorithm development on earables. DreamCatcher covers eight distinct sleep events, with synchronous dual-channel audio and motion data collected from 12 pairs (24 participants), totaling 210 hours (420 person-hours) with fine-grained labels. We tested multiple benchmark models on three tasks related to sleep event detection, demonstrating the usability and unique challenges of DreamCatcher. We hope that the proposed DreamCatcher can inspire other researchers to further explore efficient wearer-aware human vocal activity sensing on earables.
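To make the wearer-aware problem concrete, here is a toy strawman: given an audio event recorded simultaneously on two people's earbuds, attribute it to the person whose channel carries more energy. The helper `attribute_event` is hypothetical and is not one of the dataset's benchmark models, which tackle far harder conditions.

```python
# Toy illustration of wearer attribution: an event picked up on both
# earbud channels is assigned to whichever channel has higher RMS energy.
# This heuristic is a strawman baseline, not a DreamCatcher model.
import numpy as np

def attribute_event(wearer_audio: np.ndarray, partner_audio: np.ndarray) -> str:
    """Return 'wearer' if the wearer's channel has higher RMS energy."""
    rms_w = np.sqrt(np.mean(wearer_audio ** 2))
    rms_p = np.sqrt(np.mean(partner_audio ** 2))
    return "wearer" if rms_w >= rms_p else "partner"

rng = np.random.default_rng(0)
snore_near = 0.5 * rng.standard_normal(16000)   # loud on wearer's earbud
snore_far = 0.1 * rng.standard_normal(16000)    # attenuated on partner's
print(attribute_event(snore_near, snore_far))   # wearer
```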
U-Time: A Fully Convolutional Network for Time Series Segmentation Applied to Sleep Staging
Mathias Perslev, Michael Jensen, Sune Darkner, Poul Jørgen Jennum, Christian Igel
Neural networks are becoming increasingly popular for the analysis of physiological time series. The most successful deep learning systems in this domain combine convolutional and recurrent layers to extract useful features and model temporal relations. Unfortunately, these recurrent models are difficult to tune and optimize. In our experience, they often require task-specific modifications, which makes them challenging to use for non-experts. We propose U-Time, a fully feed-forward deep learning approach to physiological time series segmentation developed for the analysis of sleep data.
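As a rough illustration of the fully convolutional encoder-decoder idea behind U-Time, the sketch below segments a 1D signal into per-timestep class scores with no recurrent layers. Depths, widths, and pooling factors are placeholder assumptions, not the published architecture.

```python
# Compressed sketch of a fully convolutional encoder-decoder for 1D
# sleep-signal segmentation, in the spirit of U-Time. Layer sizes are
# illustrative only.
import torch
import torch.nn as nn

class TinyUTime(nn.Module):
    def __init__(self, in_ch=1, n_classes=5):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv1d(in_ch, 16, 5, padding=2), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv1d(16, 32, 5, padding=2), nn.ReLU())
        self.pool = nn.MaxPool1d(4)
        self.up = nn.Upsample(scale_factor=4)
        self.dec = nn.Sequential(nn.Conv1d(32 + 16, 16, 5, padding=2), nn.ReLU())
        # Per-sample class scores; epoch labels come from pooling this map.
        self.head = nn.Conv1d(16, n_classes, 1)

    def forward(self, x):
        s1 = self.enc1(x)                  # (B, 16, T)
        s2 = self.enc2(self.pool(s1))      # (B, 32, T/4)
        u = self.up(s2)                    # (B, 32, T)
        d = self.dec(torch.cat([u, s1], dim=1))  # skip connection
        return self.head(d)                # dense per-timestep logits

x = torch.randn(2, 1, 3000)                # e.g. one 30 s epoch at 100 Hz
print(TinyUTime()(x).shape)                # torch.Size([2, 5, 3000])
```

Because the network is purely feed-forward, the dense per-timestep logits can simply be average-pooled over each 30-second window to obtain epoch-level stage predictions.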
Appendix - Manifold GPLVMs for discovering non-Euclidean latent structure in neural data
To highlight the importance of unsupervised non-Euclidean learning methods in neuroscience and to illustrate the interpretability of the learned GP parameters, we consider a dataset from Peyrache et al. (2015b) recorded from the mouse anterodorsal thalamic nucleus (ADn; Figure 5a). This data has also been analyzed in Peyrache et al. (2015a), Chaudhuri et al. (2019) and Rubin et al. (2019). We consider the same example session shown in Figure 2 of Chaudhuri et al. (2019) (Mouse 28, session 140313) and bin spike counts in 500 ms time bins for analysis with mGPLVM. However, in contrast to the data considered in Section 3.1 and Section 3.2, this mouse dataset contains neurons with more heterogeneous baseline activities and tuning properties. This is reflected in the learned GP parameters which converge to small kernel length scales for neurons that contribute to the heading representation (Figure 5c, 'tuned') and large length scales for those that do not (Figure 5c, 'not tuned'). Finally, since mGPLVM does not require knowledge of behaviour, we also fitted mGPLVM to data recorded from the same neurons during a period of rapid eye movement (REM) sleep.
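For readers unfamiliar with the preprocessing mentioned above, binning spike counts in 500 ms time bins amounts to a histogram over spike times; the helper below is a hypothetical sketch assuming spike times given in seconds.

```python
# Minimal sketch of the binning step: count spikes per 500 ms window.
import numpy as np

def bin_spikes(spike_times, t_start, t_stop, bin_width=0.5):
    edges = np.arange(t_start, t_stop + bin_width, bin_width)
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts  # one spike count per 500 ms bin

spikes = np.array([0.05, 0.31, 0.72, 1.4, 1.45, 2.9])
print(bin_spikes(spikes, 0.0, 3.0))  # [2 1 2 0 0 1]
```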
PixleepFlow: A Pixel-Based Lifelog Framework for Predicting Sleep Quality and Stress Level
Na, Younghoon, Oh, Seunghun, Ko, Seongji, Lee, Hyunkyung
The analysis of lifelogs can yield valuable insights into an individual's daily life, particularly with regard to their health and well-being. Accurately assessing quality of life, however, requires diverse sensors and precise synchronization. To address this issue, this study proposes the image-based sleep quality and stress level estimation flow (PixleepFlow). PixleepFlow converts lifelog sensor data into composite image data to examine sleep patterns and their impact on overall health. Experiments were conducted using lifelog datasets to ascertain the optimal combination of data formats. In addition, we identified which sensor information has the greatest influence on quality of life through Explainable Artificial Intelligence (XAI). As a result, PixleepFlow produced more significant results than alternative data formats. This study was conducted as part of a competition, and additional findings from the lifelog dataset are detailed in Section IV. More information about PixleepFlow can be found at https://github.com/seongjiko/Pixleep.
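The abstract's conversion into composite image data could, in the simplest reading, look like the sketch below: normalize each sensor channel and stack the channels as image rows. This is an assumption-laden illustration of the general idea, not the actual PixleepFlow pipeline (see the repository above for that).

```python
# Illustrative sketch: turn multichannel lifelog time series into one
# composite uint8 image by per-channel min-max scaling.
import numpy as np

def to_composite_image(signals: np.ndarray) -> np.ndarray:
    """signals: (channels, timesteps) -> uint8 image of the same shape."""
    lo = signals.min(axis=1, keepdims=True)
    hi = signals.max(axis=1, keepdims=True)
    scaled = (signals - lo) / np.maximum(hi - lo, 1e-8)
    return (scaled * 255).astype(np.uint8)

rng = np.random.default_rng(1)
sensors = rng.standard_normal((4, 1440))   # e.g. 4 sensors, 1 day of minutes
img = to_composite_image(sensors)
print(img.shape, img.dtype)                # (4, 1440) uint8
```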
SleepGMUformer: A gated multimodal temporal neural network for sleep staging
Zhao, Chenjun, Niu, Xuesen, Yu, Xinglin, Chen, Long, Lv, Na, Zhou, Huiyu, Zhao, Aite
Sleep staging is a central aspect of sleep assessment and research; its accuracy is not only relevant to the assessment of sleep quality [3] but also key to achieving early intervention for sleep disorders and related psychiatric disorders [4]. Polysomnography (PSG) is a multi-parameter study of sleep [5], a test that diagnoses sleep disorders through different types of physiological signals recorded during sleep, such as electroencephalography (EEG), electrocardiography (ECG), electrooculography (EOG), electromyography (EMG), oro-nasal airflow, and oxygen saturation [6]. According to the Rechtschaffen and Kales (R&K) rule, PSG signals are usually divided into 30-second segments and classified into six sleep stages, namely wakefulness (Wake), four non-rapid eye movement stages (i.e., S1, S2, S3, and S4), and rapid eye movement (REM). In 2007, the American Academy of Sleep Medicine (AASM) revised the R&K system, merging stages S3 and S4 into a single non-rapid eye movement (NREM) stage. Sleep specialists typically utilize these criteria for the manual classification of sleep stages, a process that is not only labor-intensive but also prone to subjective bias [7]. Automated sleep staging is therefore a more efficient alternative to manual scoring and has greater clinical value [8].
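Both scoring rules begin by slicing the recording into 30-second epochs; a minimal sketch of that step follows. The helper name `epoch_signal` and the 100 Hz sampling rate are assumptions for illustration.

```python
# Minimal sketch of the epoching step described above: slicing a PSG
# channel into non-overlapping 30-second segments for stage labeling.
import numpy as np

def epoch_signal(signal: np.ndarray, fs: int, epoch_sec: int = 30) -> np.ndarray:
    """Return an (n_epochs, epoch_sec * fs) array; trailing samples dropped."""
    samples_per_epoch = epoch_sec * fs
    n_epochs = len(signal) // samples_per_epoch
    return signal[: n_epochs * samples_per_epoch].reshape(n_epochs, samples_per_epoch)

eeg = np.random.randn(8 * 3600 * 100)      # 8 h of EEG sampled at 100 Hz
epochs = epoch_signal(eeg, fs=100)
print(epochs.shape)                        # (960, 3000): 960 scorable epochs
```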
Multimodal Sleep Stage and Sleep Apnea Classification Using Vision Transformer: A Multitask Explainable Learning Approach
Kazemi, Kianoosh, Azimi, Iman, Khine, Michelle, Khayat, Rami N., Rahmani, Amir M., Liljeberg, Pasi
Sleep is an essential component of human physiology, contributing significantly to overall health and quality of life. Accurate sleep staging and disorder detection are crucial for assessing sleep quality. Studies in the literature have proposed PSG-based approaches and machine-learning methods utilizing single-modality signals. However, existing methods often lack multimodal, multilabel frameworks and address sleep stage and disorder classification separately. In this paper, we propose a 1D Vision Transformer for simultaneous classification of sleep stages and sleep disorders. Our method exploits the correlation between sleep disorders and specific sleep stage patterns, identifying a sleep stage and a sleep disorder simultaneously. The model is trained and tested using multimodal, multilabel sensory data (including photoplethysmogram, respiratory flow, and respiratory effort signals). The proposed method achieves an overall accuracy (Cohen's kappa) of 78% (0.66) for five-stage sleep classification and 74% (0.58) for sleep apnea classification. Moreover, we analyzed the encoder attention weights to clarify our model's predictions and investigate the influence of different features on the model's outputs. The results show that identified patterns, such as respiratory troughs and peaks, contribute more strongly to the final classification.
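The multitask setup described here (one shared encoder, separate heads for sleep stage and apnea) can be sketched as follows. Patch size, depth, and widths are placeholder assumptions and do not reproduce the paper's 1D Vision Transformer.

```python
# Sketch of the multitask idea: one shared encoder over 1D signal patches
# with two heads, one for five sleep stages and one for apnea events.
import torch
import torch.nn as nn

class MultitaskSleepViT(nn.Module):
    def __init__(self, patch=100, dim=64, n_stages=5, n_apnea=2):
        super().__init__()
        self.embed = nn.Linear(patch, dim)          # 1D "patch" embedding
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.stage_head = nn.Linear(dim, n_stages)  # task 1: sleep stage
        self.apnea_head = nn.Linear(dim, n_apnea)   # task 2: apnea label

    def forward(self, x):                           # x: (B, n_patches, patch)
        h = self.encoder(self.embed(x)).mean(dim=1) # pooled sequence feature
        return self.stage_head(h), self.apnea_head(h)

x = torch.randn(2, 30, 100)                         # 30 patches of 100 samples
stage_logits, apnea_logits = MultitaskSleepViT()(x)
print(stage_logits.shape, apnea_logits.shape)       # (2, 5) (2, 2)
```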
sDREAMER: Self-distilled Mixture-of-Modality-Experts Transformer for Automatic Sleep Staging
Chen, Jingyuan, Yao, Yuan, Anderson, Mie, Hauglund, Natalie, Kjaerby, Celia, Untiet, Verena, Nedergaard, Maiken, Luo, Jiebo
Automatic sleep staging based on electroencephalography (EEG) and electromyography (EMG) signals is an important aspect of sleep-related research. Current sleep staging methods suffer from two major drawbacks. First, there are limited information interactions between modalities in the existing methods. Second, current methods do not develop unified models that can handle different sources of input. To address these issues, we propose a novel sleep stage scoring model, sDREAMER, which emphasizes cross-modality interaction and per-channel performance. Specifically, we develop a mixture-of-modality-expert (MoME) model with three pathways for EEG, EMG, and mixed signals with partially shared weights. We further propose a self-distillation training scheme to enhance information interaction across modalities. Our model is trained with multi-channel inputs and can make classifications on either single-channel or multi-channel inputs. Experiments demonstrate that our model outperforms existing transformer-based sleep scoring methods for multi-channel inference. For single-channel inference, our model also outperforms transformer-based models trained with single-channel signals.
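The self-distillation scheme can be illustrated in miniature: treat the mixed-signal pathway as the teacher and train each single-modality pathway to match its softened predictions. The loss below is a standard temperature-scaled distillation loss, used here as an assumed stand-in for the paper's scheme.

```python
# Sketch of self-distillation: the mixed-signal pathway acts as teacher,
# and each single-modality pathway matches its soft predictions.
import torch
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, T=2.0):
    # KL divergence between temperature-softened class distributions.
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

n_classes, B = 3, 8
eeg_logits = torch.randn(B, n_classes, requires_grad=True)   # EEG pathway
emg_logits = torch.randn(B, n_classes, requires_grad=True)   # EMG pathway
mix_logits = torch.randn(B, n_classes)                       # mixed teacher

loss = distill_loss(eeg_logits, mix_logits.detach()) + \
       distill_loss(emg_logits, mix_logits.detach())
loss.backward()
print(float(loss))
```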