We are developing an autonomous mobile assistive robot named El-E to help individuals with severe motor impairments by performing object manipulation tasks such as fetching, transporting, placing, and delivering. El-E can autonomously approach a location specified by the user through an interface such as a standard laser pointer and pick up a nearby object. The initial target user population for the robot is individuals with amyotrophic lateral sclerosis (ALS). ALS, also known as Lou Gehrig's disease, is a progressive neurodegenerative disease resulting in motor impairments throughout the entire body. Because of the severity and progressive nature of ALS, results from developing robotic technologies to assist people with ALS could be applied to broader populations with motor impairments. To successfully develop and deploy assistive robot technology in the real world, we must become familiar with the needs and everyday living conditions of these individuals. We also believe that the participation of prospective users throughout the design and development process is essential to improving the usability and accessibility of the robot for the target population. To assess the needs of prospective users and to evaluate the technology under development, we applied a variety of human-studies methodologies, including interviews, photo documentation, and controlled experiments. We present an overview of research from the Healthcare Robotics Lab related to patient needs assessment and human experiments, with emphasis on human-centered design methods.
Chromosomal conformations, topologically associating domains (TADs) that assemble in nested fashion across hundreds of kilobases, and other "three-dimensional genome" (3DG) structures bypass the linear genome on a kilo- or megabase scale and play an important role in transcriptional regulation. Most of the genetic variants associated with risk for schizophrenia (SZ) are common and could be located in enhancers, repressors, and other regulatory elements that influence gene expression; however, the role of the brain's 3DG in SZ genetic risk architecture, including developmental and cell type–specific regulation, remains poorly understood. We monitored changes in the 3DG after isogenic differentiation of human induced pluripotent stem cell–derived neural progenitor cells (NPCs) into neurons or astrocyte-like glial cells on a genome-wide scale using Hi-C. With this in vitro model of brain development, we mapped cell type–specific chromosomal conformations associated with SZ risk loci and defined a risk-associated expanded genome space. Neural differentiation was associated with genome-wide 3DG remodeling, including both pruning and de novo formation of chromosomal loops. The NPC-to-neuron transition was defined by the pruning of loops involving regulators of cell proliferation, morphogenesis, and neurogenesis, consistent with a departure from a precursor stage toward postmitotic neuronal identity. Loops lost during the NPC-to-glia transition included many genes associated with neuron-specific functions, consistent with non-neuronal lineage commitment. However, neurons and NPCs, as compared with glia, harbored a much larger number of chromosomal interactions anchored in common variant sequences associated with SZ risk.
Because spatial 3DG proximity of genes is an indicator of potential coregulation, we tested whether the neural cell type–specific SZ-related "chromosomal connectome" showed evidence of coordinated transcriptional regulation and proteomic interaction among the participating genes. To this end, we generated lists of genes anchored in cell type–specific SZ risk-associated interactions. For the NPC-specific interactions, for example, we counted 386 genes: 146 within the risk loci and another 240 positioned elsewhere in the linear genome but connected to risk-locus sequences via intrachromosomal contacts.
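The two-part count described above (genes inside risk loci, plus genes elsewhere that a chromosomal contact links back to a risk locus) can be sketched with toy data. All gene names, coordinates, and the anchor-overlap rule below are illustrative assumptions, not the study's actual pipeline:

```python
# Toy sketch: a gene is "risk-connected" if it sits inside a risk locus, or if
# a chromosomal contact has one anchor over the gene and the other anchor
# over a risk locus.
risk_loci = [(1_000, 2_000), (5_000, 6_000)]      # illustrative coordinates
genes = {"GENE_A": 1_500, "GENE_B": 3_200, "GENE_C": 5_400, "GENE_D": 8_000}
contacts = [((1_200, 1_300), (3_100, 3_300))]     # loop anchor pairs

def in_locus(pos, loci):
    """True if a position falls inside any risk locus."""
    return any(lo <= pos <= hi for lo, hi in loci)

# Genes located within the risk loci themselves
within = {g for g, p in genes.items() if in_locus(p, risk_loci)}

# Genes elsewhere in the linear genome, connected to a risk locus by a contact
connected = set()
for (a_lo, a_hi), (b_lo, b_hi) in contacts:
    for g, p in genes.items():
        if a_lo <= p <= a_hi and in_locus((b_lo + b_hi) // 2, risk_loci):
            connected.add(g)
        if b_lo <= p <= b_hi and in_locus((a_lo + a_hi) // 2, risk_loci):
            connected.add(g)

print(sorted(within), sorted(connected - within))
# → ['GENE_A', 'GENE_C'] ['GENE_B']
```

In the abstract's terms, `within` corresponds to the 146 genes inside the risk loci and `connected - within` to the 240 genes reached only through intrachromosomal contacts.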
The Normal Means problem plays a fundamental role in many areas of modern high-dimensional statistics, in both theory and practice, and the Empirical Bayes (EB) approach to solving it has proven highly effective, again both in theory and in practice. However, almost all EB treatments of the Normal Means problem assume that the observations are independent. In practice, correlations are ubiquitous, and they can grossly distort EB estimates. Here, exploiting theory from Schwartzman (2010), we develop new EB methods for the Normal Means problem that account for unknown correlations among observations. We provide practical software implementations of these methods and illustrate them in the context of large-scale multiple testing and False Discovery Rate (FDR) control. In realistic numerical experiments, our methods compare favorably with other commonly used multiple testing methods.
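As a minimal illustration of the independent-observations EB Normal Means setup that the abstract builds on (not the correlation-aware method itself), the following sketch uses a normal prior with variance estimated from the marginal distribution of the observations; the simulated values and known noise level are assumptions for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Normal-means problem: theta_i ~ N(0, A), z_i = theta_i + N(0, s^2) noise
A_true, s = 1.0, 1.0          # prior variance and (known) noise SD
n = 10_000
theta = rng.normal(0.0, np.sqrt(A_true), n)
z = theta + rng.normal(0.0, s, n)

# Empirical Bayes step: the marginal is z ~ N(0, A + s^2),
# so estimate the prior variance from the observed variance of z
A_hat = max(z.var() - s**2, 0.0)

# Posterior mean (shrinkage) estimate of each theta_i
shrink = A_hat / (A_hat + s**2)
theta_eb = shrink * z

mse_mle = np.mean((z - theta) ** 2)        # unshrunk estimate
mse_eb = np.mean((theta_eb - theta) ** 2)  # EB shrinkage estimate
print(mse_mle, mse_eb)                     # EB MSE is roughly half here
```

With independent noise this shrinkage is near-optimal; the abstract's point is that correlated noise distorts the observed variance of `z`, and hence `A_hat` and the resulting shrinkage, which is what the proposed methods correct for.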
Many questions in Data Science are fundamentally causal, in that our objective is to learn the effect of some exposure (randomized or not) on an outcome of interest. Even seemingly non-causal studies (e.g., prediction or prevalence estimation) have causal elements, such as differential censoring or measurement. As a result, we, as Data Scientists, need to consider the underlying causal mechanisms that gave rise to the data, rather than simply the patterns or associations observed in the data. In this work, we review the "Causal Roadmap", a formal framework to augment our traditional statistical analyses in an effort to answer the causal questions driving our research. The specific steps of the Roadmap include clearly stating the scientific question, defining the causal model, translating the scientific question into a causal parameter, assessing the assumptions needed to translate the causal parameter into a statistical estimand, implementing statistical estimators (including parametric and semiparametric methods), and interpreting our findings. Throughout, we focus on the effect of an exposure occurring at a single time point and provide extensions to more advanced settings.
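The estimation step of a roadmap like this can be illustrated with a simple parametric G-computation estimator for a single-time-point exposure. The simulated confounder, the linear outcome model, and all variable names below are illustrative assumptions, not the paper's analysis:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Causal model (assumed): confounder W affects both exposure A and outcome Y;
# the target causal parameter is the average treatment effect E[Y(1)] - E[Y(0)].
W = rng.normal(size=n)
p_A = 1 / (1 + np.exp(-W))                    # exposure probability depends on W
A = rng.binomial(1, p_A)
Y = 1.0 * A + 2.0 * W + rng.normal(size=n)    # true ATE = 1.0

# The naive association is confounded by W
naive = Y[A == 1].mean() - Y[A == 0].mean()

# G-computation: fit an outcome regression of Y on (A, W) ...
X = np.column_stack([np.ones(n), A, W])
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)

# ... then predict everyone's outcome under A=1 and A=0 and average the difference
X1 = np.column_stack([np.ones(n), np.ones(n), W])
X0 = np.column_stack([np.ones(n), np.zeros(n), W])
ate_gcomp = (X1 @ beta - X0 @ beta).mean()

print(round(naive, 2), round(ate_gcomp, 2))   # naive is biased upward; G-comp ≈ 1.0
```

The contrast between `naive` and `ate_gcomp` is the practical payoff of the identification step: only after stating the causal model and its assumptions does the adjusted estimand correspond to the causal parameter.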
Objective: To predict individual septic children's personalized physiologic responses to vasoactive titrations by training a Recurrent Neural Network (RNN) on EMR data. Materials and Methods: This study retrospectively analyzed EMRs of patients admitted to a pediatric ICU from 2009 to 2017. Data included charted time-series vitals, labs, drugs, and interventions of children with septic shock treated with dopamine, epinephrine, or norepinephrine. An RNN was trained to predict responses in heart rate (HR), systolic blood pressure (SBP), diastolic blood pressure (DBP), and mean arterial pressure (MAP) to 8,640 titrations during 652 septic episodes and evaluated on a holdout set of 3,883 titrations during 254 episodes. A linear regression model using titration data as its sole input was also developed and compared with the RNN model. Evaluation methods included the correlation coefficient between actual physiologic responses and RNN predictions, mean absolute error (MAE), and area under the receiver operating characteristic curve (AUC). Results: The actual physiologic responses displayed significant variability and were more accurately predicted by the RNN model than by titration alone (r=0.20 vs. r=0.05, p<0.01). The RNN showed MAE and AUC improvements over the linear model. The RNN's MAEs for dopamine and epinephrine were 1-3% lower than the linear regression model's MAEs for HR, SBP, DBP, and MAP. Across all vitals and vasoactives, the RNN achieved a 1-19% AUC improvement over the linear model. Conclusion: This initial attempt in pediatric critical care to predict individual physiologic responses to vasoactive dose changes in children with septic shock demonstrated that an RNN model offers some improvement over a linear model. While not yet clinically applicable, further development may assist clinical administration of vasoactive medications in children with septic shock.
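The kind of recurrent prediction described can be sketched as a minimal Elman RNN forward pass, with randomly initialized weights standing in for a trained model. All dimensions, feature semantics, and the single-layer architecture are illustrative assumptions, not the study's actual network:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative dimensions: each time step carries charted vitals/labs/drug
# features; the final hidden state is mapped to 4 predicted responses
# (changes in HR, SBP, DBP, and MAP after a titration).
n_features, n_hidden, n_out, seq_len = 16, 32, 4, 24

# Randomly initialized weights stand in for a trained model
Wx = rng.normal(0, 0.1, (n_hidden, n_features))   # input-to-hidden
Wh = rng.normal(0, 0.1, (n_hidden, n_hidden))     # hidden-to-hidden (recurrence)
b = np.zeros(n_hidden)
Wo = rng.normal(0, 0.1, (n_out, n_hidden))        # hidden-to-output

def rnn_predict(x_seq):
    """Elman RNN forward pass over one patient's time series (seq_len, n_features)."""
    h = np.zeros(n_hidden)
    for x_t in x_seq:
        h = np.tanh(Wx @ x_t + Wh @ h + b)   # recurrent state update
    return Wo @ h                             # predicted (HR, SBP, DBP, MAP) response

x_seq = rng.normal(size=(seq_len, n_features))    # one patient's recent time series
pred = rnn_predict(x_seq)
print(pred.shape)  # → (4,)
```

The recurrence is what distinguishes this from the titration-only linear baseline: the prediction conditions on the whole recent trajectory of the patient's state, not just the dose change itself.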