Rashidi, Parisa
Improved Predictive Models for Acute Kidney Injury with IDEAs: Intraoperative Data Embedded Analytics
Adhikari, Lasith, Ozrazgat-Baslanti, Tezcan, Thottakkara, Paul, Ebadi, Ashkan, Motaei, Amir, Rashidi, Parisa, Li, Xiaolin, Bihorac, Azra
Acute kidney injury (AKI) is a common and serious postoperative complication associated with morbidity and mortality. Most existing perioperative AKI risk-score prediction models are limited in their generalizability and do not fully utilize intraoperative physiological time-series data. Thus, there is a need for intelligent, accurate, and robust systems that can leverage information from large-scale data to predict a patient's risk of developing postoperative AKI. A retrospective single-center cohort of 2,911 adult patients who underwent surgery at the University of Florida Health was used for this study. We used machine learning and statistical analysis techniques to develop perioperative models that predict the risk of AKI (during the first 3 days, the first 7 days, and until the day of discharge) before and after surgery. In particular, we examined the improvement in risk prediction gained by incorporating three intraoperative physiologic time series: mean arterial blood pressure, minimum alveolar concentration, and heart rate. For an individual patient, the preoperative model produces a probabilistic AKI risk score, which is then enriched with intraoperative statistical features through a machine learning stacking approach inside a random forest classifier. We compared model performance using the area under the receiver operating characteristic curve (AUROC), accuracy, and net reclassification improvement (NRI). The predictive performance of the proposed model is better than that of the model using preoperative data alone: for the AKI-7day outcome, the AUROC was 0.86 (accuracy 0.78) for the proposed model versus 0.84 (accuracy 0.76) for the preoperative model. Furthermore, integrating intraoperative features allowed us to correctly classify patients who had been misclassified by the preoperative model.
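Illustrative note: a minimal Python sketch of the stacking idea described in this abstract, assuming hypothetical feature arrays and placeholder names; a preoperative probability is concatenated with summary statistics of the intraoperative series and fed to a random forest. This is a sketch under stated assumptions, not the authors' implementation.

# Hedged sketch: stacking a preoperative risk probability with intraoperative
# summary features inside a random forest. All data and names are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X_preop = rng.normal(size=(n, 10))      # preoperative clinical features (placeholder)
X_intraop = rng.normal(size=(n, 6))     # e.g., mean/min/max/slope of MAP, MAC, heart rate
y = rng.integers(0, 2, size=n)          # AKI outcome label (placeholder)

X_pre_tr, X_pre_te, X_in_tr, X_in_te, y_tr, y_te = train_test_split(
    X_preop, X_intraop, y, test_size=0.3, random_state=0)

# Stage 1: the preoperative model produces a probabilistic risk score.
preop_model = LogisticRegression(max_iter=1000).fit(X_pre_tr, y_tr)
risk_tr = preop_model.predict_proba(X_pre_tr)[:, 1:]
risk_te = preop_model.predict_proba(X_pre_te)[:, 1:]

# Stage 2: a random forest stacks the risk score with intraoperative statistics.
stacked_tr = np.hstack([risk_tr, X_in_tr])
stacked_te = np.hstack([risk_te, X_in_te])
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(stacked_tr, y_tr)
print("stacked-model test accuracy:", rf.score(stacked_te, y_te))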
The Intelligent ICU Pilot Study: Using Artificial Intelligence Technology for Autonomous Patient Monitoring
Davoudi, Anis, Malhotra, Kumar Rohit, Shickel, Benjamin, Siegel, Scott, Williams, Seth, Ruppert, Matthew, Bihorac, Emel, Ozrazgat-Baslanti, Tezcan, Tighe, Patrick J., Bihorac, Azra, Rashidi, Parisa
Currently, many critical care indices, such as the physical function or facial pain expressions of nonverbal patients, are repetitively assessed and recorded by overburdened nurses. In addition, much essential information about patients and their environment is not captured at all, or is captured in a non-granular manner, e.g., sleep disturbance factors such as bright light, loud background noise, or excessive visitations. In this pilot study, we examined the feasibility of using pervasive sensing technology and artificial intelligence for autonomous and granular monitoring of critically ill patients and their environment in the Intensive Care Unit (ICU). As an exemplar prevalent condition, we also characterized delirious and non-delirious patients and their environment. We used wearable sensors, light and sound sensors, and a high-resolution camera to collect data on patients and their environment, and analyzed the collected data using deep learning and statistical analysis. Our system performed face detection, face recognition, facial action unit detection, head pose detection, facial expression recognition, posture recognition, actigraphy analysis, sound pressure and light level detection, and visitation frequency detection. We were able to detect patients' faces (mean average precision (mAP) = 0.94), recognize patients' faces (mAP = 0.80), and recognize their postures (F1 = 0.94). We also found that all facial expressions, 11 activity features, visitation frequency during the day, visitation frequency during the night, light levels, and sound pressure levels during the night were significantly different between delirious and non-delirious patients (p < 0.05). In summary, we showed that granular and autonomous monitoring of critically ill patients and their environment is feasible and can be used to characterize critical care conditions and related environmental factors.
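Illustrative note: a short Python sketch of the kind of group comparison reported above (delirious vs. non-delirious patients), assuming two hypothetical arrays of per-patient feature values. The abstract does not specify the statistical test, so a nonparametric Mann-Whitney U test stands in here purely as an example.

# Hedged sketch: comparing a sensed feature (e.g., nighttime sound pressure level)
# between delirious and non-delirious patients. The data and the choice of test
# are illustrative assumptions, not the study's exact method.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
delirious = rng.normal(55.0, 5.0, size=40)       # hypothetical dB values
non_delirious = rng.normal(48.0, 5.0, size=60)   # hypothetical dB values

stat, p_value = mannwhitneyu(delirious, non_delirious, alternative="two-sided")
print(f"U={stat:.1f}, p={p_value:.4f}, significant={p_value < 0.05}")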
DeepSOFA: A Real-Time Continuous Acuity Score Framework using Deep Learning
Shickel, Benjamin, Loftus, Tyler J., Ozrazgat-Baslanti, Tezcan, Ebadi, Ashkan, Bihorac, Azra, Rashidi, Parisa
Traditional methods for assessing illness severity and predicting in-hospital mortality among critically ill patients require manual, time-consuming, and error-prone calculations that are further hindered by the use of static variable thresholds derived from aggregate patient populations. These coarse frameworks do not capture time-sensitive individual physiological patterns and are not suitable for instantaneous assessment of patients' acuity trajectories, a critical task in the ICU, where conditions often change rapidly. Furthermore, they are ill-suited to capitalize on the emerging availability of streaming electronic health record data. We propose a novel acuity score framework (DeepSOFA) that leverages temporal patient measurements in conjunction with deep learning models to make accurate assessments of a patient's illness severity at any point during their ICU stay. We compare DeepSOFA with baseline SOFA models using the same predictors and find that, at any point during an ICU admission, DeepSOFA yields more accurate predictions of in-hospital mortality.
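Illustrative note: a minimal PyTorch sketch of the general approach, with hypothetical inputs: a recurrent network consumes a sequence of ICU measurements and emits a mortality probability at every time step, giving a continuous acuity trajectory. Layer sizes and variable names are placeholders, not the published DeepSOFA architecture.

# Hedged sketch: a recurrent model that scores in-hospital mortality risk at each
# time step of an ICU stay. Dimensions and data are hypothetical placeholders.
import torch
import torch.nn as nn

class AcuityRNN(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, time, n_features)
        h, _ = self.gru(x)                     # hidden state at every time step
        return torch.sigmoid(self.head(h))     # per-step mortality probability

model = AcuityRNN(n_features=12)
x = torch.randn(8, 48, 12)                     # 8 patients, 48 hourly steps, 12 signals
risk_trajectory = model(x)                     # (8, 48, 1) continuous acuity scores
print(risk_trajectory.shape)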
Deep EHR: A Survey of Recent Advances in Deep Learning Techniques for Electronic Health Record (EHR) Analysis
Shickel, Benjamin, Tighe, Patrick, Bihorac, Azra, Rashidi, Parisa
The past decade has seen an explosion in the amount of digital information stored in electronic health records (EHR). While primarily designed for archiving patient clinical information and administrative healthcare tasks, many researchers have found secondary uses for these records in various clinical informatics tasks. Over the same period, the machine learning community has seen widespread advances in deep learning techniques, which have also been successfully applied to the vast amounts of EHR data. In this paper, we review these deep EHR systems, examining architectures, technical aspects, and clinical applications. We also identify shortcomings of current techniques and discuss avenues of future research for EHR-based deep learning.
ART: An Availability-Aware Active Learning Framework for Data Streams
Shickel, Benjamin (University of Florida) | Rashidi, Parisa (University of Florida)
Active learning, a technique in which a learner self-selects the most important unlabeled examples to be labeled by a human expert, is a useful approach when labeled training data is scarce or expensive to obtain. While active learning has been well documented in the offline pool-based setting, less attention has been paid to applying it in an online streaming setting. In this paper, we introduce a novel generic framework called ART (Availability-aware active leaRning in data sTreams). We examine the multiple-oracle active learning environment and present a novel method for querying multiple imperfect oracles based on dynamic availability schedules. We introduce a flexible availability-based definition of labeling budget for data streams and present a mechanism to automatically adapt to implicit changes in oracle availability based on past oracle behavior. Compared to baseline approaches, our results indicate improvements in accuracy and query utility using our availability-based multiple-oracle framework.
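Illustrative note: a compact, hypothetical Python sketch of the querying idea: given several imperfect oracles whose availability varies over time, route each uncertain stream instance to an available oracle while spending from an availability-based budget. The selection rule, availability model, and all names are illustrative assumptions, not the ART algorithm itself.

# Hedged sketch: streaming active learning with multiple oracles that have
# time-varying availability. The uncertainty threshold and availability model
# are placeholders, not the paper's exact method.
import random

class Oracle:
    def __init__(self, name, availability):
        self.name = name
        self.availability = availability        # probability of being free right now

    def is_available(self):
        return random.random() < self.availability

    def label(self, x):
        return int(x > 0.5)                     # stand-in for an imperfect human label

def stream_active_learning(stream, oracles, budget, uncertainty_threshold=0.2):
    labeled = []
    for x, model_confidence in stream:
        if budget <= 0:
            break
        # Query only when the current model is uncertain about this instance.
        if abs(model_confidence - 0.5) < uncertainty_threshold:
            free = [o for o in oracles if o.is_available()]
            if free:                            # route to any currently available oracle
                oracle = random.choice(free)
                labeled.append((x, oracle.label(x), oracle.name))
                budget -= 1
    return labeled

random.seed(0)
oracles = [Oracle("expert_a", 0.6), Oracle("expert_b", 0.3)]
stream = [(random.random(), random.random()) for _ in range(200)]
print(len(stream_active_learning(stream, oracles, budget=20)), "queries issued")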
Reports on the 2012 AAAI Fall Symposium Series
Dogan, Rezarta Islamaj (National Library of Medicine) | Gil, Yolanda (University of Southern California) | Hirsh, Haym (Rutgers University) | Krishnan, Narayanan C. (Washington State University) | Lewis, Michael (University of Pittsburgh) | Mericli, Cetin (Carnegie Mellon University) | Rashidi, Parisa (Northwestern University) | Raskin, Victor (Purdue University) | Swarup, Samarth (Virginia Tech) | Sun, Wei (George Mason University) | Taylor, Julia M. (Purdue University) | Yeganova, Lana (National Library of Medicine)
The Association for the Advancement of Artificial Intelligence was pleased to present the 2012 Fall Symposium Series, held Friday through Sunday, November 2–4, at the Westin Arlington Gateway in Arlington, Virginia. The titles of the eight symposia were as follows: AI for Gerontechnology (FS-12-01), Artificial Intelligence of Humor (FS-12-02), Discovery Informatics: The Role of AI Research in Innovating Scientific Processes (FS-12-03), Human Control of Bio-Inspired Swarms (FS-12-04), Information Retrieval and Knowledge Discovery in Biomedical Text (FS-12-05), Machine Aggregation of Human Judgment (FS-12-06), Robots Learning Interactively from Human Teachers (FS-12-07), and Social Networks and Social Contagion (FS-12-08). The highlights of each symposium are presented in this report.
Preface
Cook, Diane J. (Washington State University) | Krishnan, Narayanan C. (Washington State University) | Rashidi, Parisa (University of Florida) | Skubic, Marjorie (University of Missouri-Columbia) | Mihailidis, Alex (University of Toronto)
The aging population, the increasing cost of formal health care, caregiver burden, and the importance that older adults place on living independently in their own homes motivate the need for the development of patient-centric technologies that promote safe independent living. These patient-centric technologies need to address various aging-related physical and cognitive health problems such as heart disease, diabetes, deterioration of physical function, falls, wandering, stroke, memory problems, lack of medication adherence, cognitive decline, and loneliness. Advances in sensor and computing technology that allow for ambient, unobtrusive, and continuous home monitoring have opened new vistas for the development of such technologies.
Activity Recognition Based on Home to Home Transfer Learning
Rashidi, Parisa (Washington State University) | Cook, Diane J. (Washington State University)
Activity recognition plays an important role in many areas, such as smart environments, by offering unprecedented opportunities for assisted living, automation, security, and energy efficiency. It is also an essential component of planning and plan recognition in smart environments. One challenge of activity recognition is the need to collect and annotate huge amounts of data for each new physical setting in order to carry out conventional activity discovery and recognition algorithms. This extensive initial phase of data collection and annotation results in a prolonged installation process and excessive time investment for each new space. In this paper we propose a new method of transferring learned knowledge of activities to a new physical space in order to leverage the learning process in the new environment. Our method, called "Home to Home Transfer Learning" (HHTL), is based on a semi-EM framework and models activities using structural, temporal, and spatial features. This method allows us to avoid the tedious task of collecting and labeling huge amounts of data in the target space and allows for a more accelerated and more scalable deployment cycle in the real world. It also allows us to exploit the insights learned in previous spaces. To validate our algorithms, we use data collected in several smart apartments with different physical layouts.
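Illustrative note: a rough Python sketch of a generic semi-supervised EM-style loop for transferring activity labels from a labeled source home to an unlabeled target home that shares the same feature representation. The classifier choice, hard-label E-step, and all names are simplifying assumptions for illustration; this is not the HHTL algorithm described in the paper.

# Hedged sketch: generic semi-supervised EM-style label transfer between homes.
# Data, features, and the naive Bayes model are hypothetical placeholders.
import numpy as np
from sklearn.naive_bayes import GaussianNB

def semi_em_transfer(X_source, y_source, X_target, n_iter=10):
    model = GaussianNB().fit(X_source, y_source)         # initialize from source labels
    for _ in range(n_iter):
        # E-step: estimate labels for the unlabeled target-home data.
        target_labels = model.predict(X_target)
        # M-step: refit on source labels plus current target label estimates.
        X_all = np.vstack([X_source, X_target])
        y_all = np.concatenate([y_source, target_labels])
        model = GaussianNB().fit(X_all, y_all)
    return model

rng = np.random.default_rng(2)
X_src = rng.normal(size=(120, 5))                        # source-home feature vectors
y_src = rng.integers(0, 3, size=120)                     # source-home activity labels
X_tgt = rng.normal(size=(200, 5))                        # unlabeled target-home data
transferred = semi_em_transfer(X_src, y_src, X_tgt)
print(transferred.predict(X_tgt[:5]))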