Siegel, Scott
Leveraging Computer Vision in the Intensive Care Unit (ICU) for Examining Visitation and Mobility
Siegel, Scott, Zhang, Jiaqing, Bandyopadhyay, Sabyasachi, Nerella, Subhash, Silva, Brandon, Baslanti, Tezcan, Bihorac, Azra, Rashidi, Parisa
Despite the importance of closely monitoring patients in the Intensive Care Unit (ICU), many aspects of care are still assessed in a limited manner due to the time constraints imposed on healthcare providers. For example, although excessive visitation during rest hours can exacerbate the risk of circadian rhythm disruption and delirium, such visitation is not captured in the ICU. Likewise, while mobility can be an important indicator of recovery or deterioration in ICU patients, it is captured only sporadically or not at all. In recent years, computer vision has found application in many domains by reducing human burden. Computer vision systems in the ICU could likewise enable assessments that do not currently exist, or improve the frequency and accuracy of existing assessments, while reducing staff workload. In this study, we leverage a state-of-the-art noninvasive computer vision system based on depth imaging to characterize ICU visitation and patient mobility. We then examine the relationship between visitation and several patient outcomes, including pain, acuity, and delirium. We found that deteriorating patient acuity and the incidence of delirium were associated with increased visitation. In contrast, self-reported pain, measured using the Defense and Veterans Pain Rating Scale (DVPRS), was correlated with decreased visitation. Our findings highlight the feasibility and potential of using noninvasive autonomous systems to monitor ICU patients.
Transformers in Healthcare: A Survey
Nerella, Subhash, Bandyopadhyay, Sabyasachi, Zhang, Jiaqing, Contreras, Miguel, Siegel, Scott, Bumin, Aysegul, Silva, Brandon, Sena, Jessica, Shickel, Benjamin, Bihorac, Azra, Khezeli, Kia, Rashidi, Parisa
In contrast, Transformers employ a "Scaled Dot-Product Attention" mechanism that is parallelizable, which enables large-scale pretraining. Additionally, self-supervised pretraining paradigms such as masked language modeling on large unlabeled datasets allow Transformers to be trained without costly annotations. Although originally designed for the NLP domain [3], Transformers have been adapted to various domains such as computer vision [5, 6], remote sensing [7], time series [8], speech processing [9], and multimodal learning [10]. Consequently, modality-specific surveys have emerged in the medical domain, focusing on medical imaging [11-13] and biomedical language models [14]. This paper aims to provide a comprehensive overview of Transformer models applied across multiple data modalities to address healthcare objectives. We also discuss pretraining strategies for managing the lack of robust, annotated healthcare datasets. The rest of the paper is organized as follows: Section 2 describes our strategy for identifying relevant citations; Section 3 describes the architecture of the original Transformer; Section 4 describes the two primary Transformer variants, the Bidirectional Encoder Representations from Transformers (BERT) and the Vision Transformer (ViT); Section 5 describes advancements in large language models (LLMs); and Sections 6 through 12 review Transformers in healthcare.
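The parallelism claimed above comes from the fact that scaled dot-product attention scores every position against every other position in a single matrix product, rather than stepping through the sequence recurrently. A minimal NumPy sketch (not the survey's code; function and variable names are illustrative) of the operation from Vaswani et al.:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.

    Q, K: (seq_len, d_k); V: (seq_len, d_v).
    One matrix product scores all position pairs at once,
    which is why the computation parallelizes well on GPUs.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                     # (seq_len, seq_len)
    # numerically stable row-wise softmax over the key axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                                  # weighted mix of values

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                             # toy sequence of 4 tokens
out = scaled_dot_product_attention(X, X, X)             # self-attention
print(out.shape)                                        # (4, 8)
```

In a full Transformer, Q, K, and V are learned linear projections of the input, and the operation is repeated over multiple heads; the sketch above shows only the core attention kernel.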
AI-Enhanced Intensive Care Unit: Revolutionizing Patient Care with Pervasive Sensing
Nerella, Subhash, Guan, Ziyuan, Siegel, Scott, Zhang, Jiaqing, Khezeli, Kia, Bihorac, Azra, Rashidi, Parisa
The intensive care unit (ICU) is a specialized hospital space where critically ill patients receive intensive care and monitoring. Comprehensive monitoring is imperative for assessing patients' conditions, in particular acuity, and ultimately the quality of care. However, the extent of patient monitoring in the ICU is limited by time constraints and the workload on healthcare providers. Currently, visual assessments of acuity, including fine details such as facial expressions, posture, and mobility, are captured sporadically or not at all. These manual observations are subjective, prone to documentation errors, and add workload that overburdens care providers. Artificial Intelligence (AI)-enabled systems have the potential to augment patient visual monitoring and assessment due to their exceptional learning capabilities. Such systems require robust annotated data for training. To this end, we have developed a pervasive sensing and data processing system that collects data from multiple modalities (depth images, color RGB images, accelerometry, electromyography, sound pressure, and light levels) in the ICU to support intelligent monitoring systems for continuous and granular assessment of acuity, delirium risk, pain, and mobility. This paper presents the Intelligent Intensive Care Unit (I2CU) system architecture we developed for real-time patient monitoring and visual assessment.
Automatic Detection and Classification of Cognitive Distortions in Mental Health Text
Shickel, Benjamin, Siegel, Scott, Heesacker, Martin, Benton, Sherry, Rashidi, Parisa
In cognitive psychology, automatic and self-reinforcing irrational thought patterns are known as cognitive distortions. Left unchecked, patients exhibiting these types of thoughts can become stuck in negative feedback loops of unhealthy thinking, leading to inaccurate perceptions of reality commonly associated with anxiety and depression. In this paper, we present a machine learning framework for the automatic detection and classification of 15 common cognitive distortions in two novel mental health free-text datasets collected from crowdsourcing and from a real-world online therapy program. When differentiating between distorted and non-distorted passages, our model achieved a weighted F1 score of 0.88. For classifying distorted passages into one of 15 distortion categories, our model yielded weighted F1 scores of 0.68 on the larger crowdsourced dataset and 0.45 on the smaller online counseling dataset, both of which outperformed random baselines by a large margin. For both tasks, we also identified the most discriminative words and phrases between classes to highlight common thematic elements for improving targeted and therapist-guided mental health treatment. Furthermore, we performed an exploratory analysis using unsupervised content-based clustering and topic modeling algorithms as a first effort toward a data-driven perspective on the thematic relationships between similar cognitive distortions traditionally deemed unique. Finally, we highlight the difficulties of applying mental health-based machine learning in a real-world setting and comment on the implications and benefits of our framework for improving the automated delivery of therapeutic treatment in conjunction with traditional cognitive-behavioral therapy. According to the National Institute of Mental Health, anxiety disorders affect more than 18% of the U.S. adult population every year [1]. Additionally, the National Survey on Drug Use and Health reports that 6.7% of the U.S. adult population experienced at least one major depressive episode in the past year [2]. This work was supported by NSF-IIP 1631871 from the National Science Foundation (NSF), Division of Industrial Innovation and Partnerships (IIP). Rashidi are with the University of Florida, Gainesville, FL 32611 USA (email: shickelb@ufl.edu); S. Benton is with TAO Connect, Inc., St. Petersburg, FL 33701 USA (email: sherry.benton@taoconnect.org).
The Intelligent ICU Pilot Study: Using Artificial Intelligence Technology for Autonomous Patient Monitoring
Davoudi, Anis, Malhotra, Kumar Rohit, Shickel, Benjamin, Siegel, Scott, Williams, Seth, Ruppert, Matthew, Bihorac, Emel, Ozrazgat-Baslanti, Tezcan, Tighe, Patrick J., Bihorac, Azra, Rashidi, Parisa
Currently, many critical care indices are repetitively assessed and recorded by overburdened nurses, e.g., physical function or the facial pain expressions of nonverbal patients. In addition, much essential information about patients and their environment is not captured at all, or is captured in a non-granular manner, e.g., sleep disturbance factors such as bright light, loud background noise, or excessive visitation. In this pilot study, we examined the feasibility of using pervasive sensing technology and artificial intelligence for autonomous and granular monitoring of critically ill patients and their environment in the Intensive Care Unit (ICU). As an exemplar prevalent condition, we also characterized delirious and non-delirious patients and their environments. We used wearable sensors, light and sound sensors, and a high-resolution camera to collect data on patients and their environment, and we analyzed the collected data using deep learning and statistical analysis. Our system performed face detection, face recognition, facial action unit detection, head pose detection, facial expression recognition, posture recognition, actigraphy analysis, sound pressure and light level detection, and visitation frequency detection. We were able to detect patients' faces (mean average precision (mAP) = 0.94), recognize patients' faces (mAP = 0.80), and recognize their postures (F1 = 0.94). We also found that all facial expressions, 11 activity features, visitation frequency during the day, visitation frequency during the night, light levels, and sound pressure levels during the night differed significantly between delirious and non-delirious patients (p-value < 0.05). In summary, we showed that granular and autonomous monitoring of critically ill patients and their environment is feasible and can be used to characterize critical care conditions and related environmental factors.