A YOUNG MAN, let's call him Roger, arrives at the emergency department complaining of belly pain and nausea. A physical exam reveals that the pain is focused in the lower right portion of his abdomen. The doctor worries that it could be appendicitis. But by the time the imaging results come back, Roger is feeling better, and the scan shows that his appendix appears normal. The doctor turns to the computer to prescribe two medications, one for nausea and Tylenol for pain, before discharging him. This is one of the fictitious scenarios presented to 55 physicians around the country as part of a study to look at the usability of electronic health records (EHRs).
Predictive analytics, artificial intelligence, machine learning, personalization, consumer-centric services, enhanced security and telehealth all will affect the delivery and business of healthcare in big ways in 2020, according to five health IT experts from GetWellNetwork, a digital health company that focuses on the patient experience and patient engagement. Healthcare IT News interviewed the CEO, CSO, CISO, CTO and vice president of strategy at GetWellNetwork to get their perspectives on where health IT is headed this year. Their answers ran the gamut, and are good indicators of where healthcare provider organization CIOs and other provider IT leaders need to focus their attention. In 2020, predictive guidance will enhance patient workflows, leading clinicians to increasingly deliver the right modality of treatment, adjust treatment recommendations as needed and triage patients to the right location throughout their care journey, whether it is the ER, urgent care or an at-home video consultation, said Robin Cavanaugh, chief technology officer at GetWellNetwork. "Additionally, predictive analytics will guide patient care by suggesting additional healthcare services that similar patients have utilized, augmenting treatment protocols with healthy living suggestions and curating information to resources that may be helpful after treatment," he added.
Cerner was interviewing Silicon Valley giants to pick a storage provider for 250 million health records, one of the largest collections of U.S. patient data. Google dispatched former chief executive Eric Schmidt to personally pitch Cerner over several phone calls and offered around $250 million in discounts and incentives, people familiar with the matter say. Google had a bigger goal in pushing for the deal than dollars and cents: a way to expand its effort to collect, analyze and aggregate health data on millions of Americans. Google representatives were vague in answering questions about how Cerner's data would be used, making the health-care company's executives wary, the people say. Eventually, Cerner struck a storage deal with Amazon.com. The failed Cerner deal reveals an emerging challenge to Google's move into health care: gaining the trust of health care partners and the public.
Identifying patterns from the neuroimaging recordings of brain activity related to the unobservable psychological or mental state of an individual can be treated as an unsupervised pattern recognition problem. The main challenges, however, for such an analysis of fMRI data are: a) defining a physiologically meaningful feature-space for representing the spatial patterns across time; b) dealing with the high-dimensionality of the data; and c) robustness to the various artifacts and confounds in the fMRI time-series. In this paper, we present a network-aware feature-space to represent the states of a general network that enables comparing and clustering such states in a manner that is a) meaningful in terms of the network connectivity structure; b) computationally efficient; c) low-dimensional; and d) relatively robust to structured and random noise artifacts. This feature-space is obtained from a spherical relaxation of the transportation distance metric, which measures the cost of transporting "mass" over the network to transform one function into another. Through theoretical and empirical assessments, we demonstrate the accuracy and efficiency of the approximation, especially for large problems.
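The exact transportation distance that this feature-space relaxes can be illustrated on a toy network. The sketch below is not the paper's spherical relaxation; it solves the underlying transportation problem directly as a linear program, with an illustrative 3-node path graph whose transport costs are shortest-path hop counts.

```python
import numpy as np
from scipy.optimize import linprog

def transport_distance(mu, nu, cost):
    """Exact transportation (earth mover's) distance between two mass
    distributions mu, nu over network nodes, given pairwise transport costs
    (e.g. shortest-path lengths). Solved as a linear program over flows."""
    n = len(mu)
    c = cost.flatten()  # objective: sum of cost[i, j] * flow[i, j]
    A_eq, b_eq = [], []
    # Row marginals: mass leaving node i must equal mu[i].
    for i in range(n):
        row = np.zeros(n * n)
        row[i * n:(i + 1) * n] = 1.0
        A_eq.append(row)
        b_eq.append(mu[i])
    # Column marginals: mass arriving at node j must equal nu[j].
    for j in range(n):
        col = np.zeros(n * n)
        col[j::n] = 1.0
        A_eq.append(col)
        b_eq.append(nu[j])
    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None), method="highs")
    return res.fun

# Path graph 0 - 1 - 2 with unit edges: cost[i, j] = |i - j| hops.
cost = np.abs(np.subtract.outer(np.arange(3), np.arange(3))).astype(float)
mu = np.array([1.0, 0.0, 0.0])  # all mass on node 0
nu = np.array([0.0, 0.0, 1.0])  # all mass on node 2
d = transport_distance(mu, nu, cost)
print(d)  # moving one unit of mass two hops costs 2.0
```

The exact solve scales poorly with network size, which is exactly the motivation the abstract gives for a cheap relaxation.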
Electronic health records provide a rich source of data for machine learning methods to learn dynamic treatment responses over time. However, any direct estimation is hampered by the presence of time-dependent confounding, where actions taken are dependent on time-varying variables related to the outcome of interest. Drawing inspiration from marginal structural models, a class of methods in epidemiology which use propensity weighting to adjust for time-dependent confounders, we introduce the Recurrent Marginal Structural Network - a sequence-to-sequence architecture for forecasting a patient's expected response to a series of planned treatments. Papers published at the Neural Information Processing Systems Conference.
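The propensity-weighting idea that marginal structural models contribute can be sketched in isolation. The following toy example (simulated data, a single binary treatment, and a logistic propensity model; none of it is the paper's architecture) computes the stabilized inverse-probability-of-treatment weights that adjust for a confounder:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
L = rng.normal(size=(n, 1))                    # confounder (e.g. a vital sign)
p_treat = 1.0 / (1.0 + np.exp(-1.5 * L[:, 0]))
A = rng.binomial(1, p_treat)                   # treatment depends on L

# Denominator: estimated P(A | L) from a propensity model.
denom_model = LogisticRegression().fit(L, A)
p_a_given_l = denom_model.predict_proba(L)[np.arange(n), A]

# Numerator: marginal P(A), which stabilizes the weights.
p_a = np.where(A == 1, A.mean(), 1.0 - A.mean())
sw = p_a / p_a_given_l   # stabilized weights, mean close to 1
```

Reweighting outcome regressions by `sw` creates a pseudo-population in which treatment is independent of `L`; the Recurrent Marginal Structural Network extends this idea to sequences of treatments with time-varying confounders.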
Despite their impressive performance, Deep Neural Networks (DNNs) typically underperform Gradient Boosting Trees (GBTs) on many tabular-dataset learning tasks. We propose that applying a different regularization coefficient to each weight might boost the performance of DNNs by allowing them to make more use of the more relevant inputs. However, this will lead to an intractable number of hyperparameters. Here, we introduce Regularization Learning Networks (RLNs), which overcome this challenge by introducing an efficient hyperparameter tuning scheme which minimizes a new Counterfactual Loss. Our results show that RLNs significantly improve DNNs on tabular datasets, and achieve comparable results to GBTs, with the best performance achieved with an ensemble that combines GBTs and RLNs.
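The core idea of a different regularization coefficient per weight can be shown on a toy linear model. This sketch is not an RLN (there is no Counterfactual Loss or tuning scheme here); it only demonstrates, with hand-picked coefficients, how per-weight penalties let a model suppress an irrelevant input while keeping a relevant one:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=200)  # only feature 0 matters

def fit(X, y, lambdas, lr=0.1, steps=2000):
    """Least squares by gradient descent with a separate L2
    regularization coefficient for each weight."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y) + lambdas * w
        w -= lr * grad
    return w

# Light penalty on the relevant weight, heavy penalty on the irrelevant one.
w = fit(X, y, lambdas=np.array([1e-4, 10.0]))
print(w.round(2))  # first weight near 3, second shrunk toward 0
```

In an RLN these per-weight coefficients are not hand-picked but learned, which is what makes the approach tractable despite the coefficient count matching the weight count.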
Precision medicine is an evolving healthcare approach focused on tailoring medical decisions, treatments, practices and products to individual patients based on genetic, environmental, lifestyle and other factors, i.e., delivering the right treatment to the right patient at the right time. Artificial intelligence (AI) refers to intelligence demonstrated by machines or computers that mimic cognitive functions that humans associate with the human mind, such as learning, interpreting, composing and problem solving. Fueled by advances in computational power, theoretical understanding, and ever-increasing amounts of data, the last decade has witnessed widespread applications of AI in every major field of human society, including medicine and healthcare. Broadly speaking, AI can help to realize the promise of precision medicine and modernize healthcare in three major areas: (1) disease prevention, (2) personalized diagnosis, and (3) personalized treatment. This Research Topic is intended to present some of the state-of-the-art developments of artificial intelligence in precision medicine in recent years as well as practical considerations in applying AI in modern clinics.
We construct a biologically motivated stochastic differential model of the neural and hemodynamic activity underlying the observed Blood Oxygen Level Dependent (BOLD) signal in Functional Magnetic Resonance Imaging (fMRI). The model poses a difficult parameter estimation problem, both theoretically due to the nonlinearity and divergence of the differential system, and computationally due to its time and space complexity. We adapt a particle filter and smoother to the task, and discuss some of the practical approaches used to tackle the difficulties, including use of sparse matrices and parallelisation. Results demonstrate the tractability of the approach in its application to an effective connectivity study. Papers published at the Neural Information Processing Systems Conference.
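The particle filtering machinery at the heart of this estimation problem can be sketched on a much simpler model. The code below is a minimal bootstrap filter for a toy 1-D linear-Gaussian state-space model (an assumption for illustration; the paper's hemodynamic model is nonlinear and multidimensional), showing the propagate/weight/resample loop:

```python
import numpy as np

def bootstrap_filter(obs, n_particles=2000, seed=0):
    """Minimal bootstrap particle filter for the toy model
    x_t = 0.9 * x_{t-1} + N(0, 0.5^2),  y_t = x_t + N(0, 0.5^2)."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(size=n_particles)
    means = []
    for y in obs:
        # Propagate particles through the transition model.
        particles = 0.9 * particles + rng.normal(scale=0.5, size=n_particles)
        # Weight by the Gaussian observation likelihood.
        w = np.exp(-0.5 * ((y - particles) / 0.5) ** 2)
        w /= w.sum()
        means.append(np.sum(w * particles))
        # Multinomial resampling to avoid weight degeneracy.
        particles = rng.choice(particles, size=n_particles, p=w)
    return np.array(means)

# Simulate a trajectory and check the filter tracks the hidden state.
rng = np.random.default_rng(42)
x, xs, ys = 0.0, [], []
for _ in range(100):
    x = 0.9 * x + rng.normal(scale=0.5)
    xs.append(x)
    ys.append(x + rng.normal(scale=0.5))
est = bootstrap_filter(ys)
mse = np.mean((est - np.array(xs)) ** 2)
print(mse)  # should beat the raw observation noise variance of 0.25
```

The sparse-matrix and parallelization tricks the abstract mentions address the cost of running such loops over high-dimensional hemodynamic states, not the filter logic itself.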
Functional Magnetic Resonance Imaging (fMRI) provides an unprecedented window into the complex functioning of the human brain, typically detailing the activity of thousands of voxels during hundreds of sequential time points. Unfortunately, the interpretation of fMRI is complicated both by the relatively unknown connection between the hemodynamic response and neural activity and by the unknown spatiotemporal characteristics of the cognitive patterns themselves. Here, we use data from the Experience Based Cognition competition to compare global and local methods of prediction applying both linear and nonlinear techniques of dimensionality reduction. We build global low dimensional representations of an fMRI dataset, using linear and nonlinear methods. We learn a set of time series that are implicit functions of the fMRI data, and predict the values of these time series in the future from the knowledge of the fMRI data only.
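The global, linear end of this pipeline, reducing many voxels to a low-dimensional representation and predicting a target time series from it, can be sketched as follows. The data here are synthetic (a hypothetical 300-timepoint, 1000-voxel recording driven by 5 latent series), and PCA plus ridge regression stand in for the paper's broader family of linear and nonlinear methods:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
T, V = 300, 1000                       # time points x voxels
latent = rng.normal(size=(T, 5))       # hidden low-dimensional dynamics
loadings = rng.normal(size=(5, V))
fmri = latent @ loadings + rng.normal(scale=0.5, size=(T, V))
target = latent[:, 0]                  # a time series implicit in the data

# Global low-dimensional representation fit on the first 200 time points.
pca = PCA(n_components=10).fit(fmri[:200])
Z_train, Z_test = pca.transform(fmri[:200]), pca.transform(fmri[200:])

# A linear predictor on top of the reduced representation.
model = Ridge().fit(Z_train, target[:200])
r = np.corrcoef(model.predict(Z_test), target[200:])[0, 1]
print(r)  # held-out correlation with the target series
```

With real fMRI the latent structure is unknown and noisier, which is where the nonlinear dimensionality-reduction alternatives the abstract compares become relevant.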
We propose a method for reconstruction of human brain states directly from functional neuroimaging data. The method extends the traditional multivariate regression analysis of discretized fMRI data to the domain of stochastic functional measurements, facilitating evaluation of brain responses to naturalistic stimuli and boosting the power of functional imaging. Population based incremental learning is used to search for spatially distributed voxel clusters, taking into account the variation in haemodynamic lag across brain areas and among subjects by voxel-wise non-linear registration of stimuli to fMRI data. The method captures spatially distributed brain responses to naturalistic stimuli without attempting to localize function. Application of the method for prediction of naturalistic stimuli from new and unknown fMRI data shows that the approach is capable of identifying distributed clusters of brain locations that are highly predictive of specific stimuli.
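Population based incremental learning itself is a compact algorithm: maintain a probability vector over binary choices (here, which voxels join a cluster), sample candidate solutions, and nudge the vector toward the best sample. The sketch below runs it on a toy "onemax" objective rather than a voxel-selection fitness, so the objective and parameters are illustrative only:

```python
import numpy as np

def pbil(fitness, n_bits, pop_size=50, lr=0.1, iters=200, seed=0):
    """Minimal population-based incremental learning: evolve a probability
    vector over bit strings toward high-fitness samples."""
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)           # start with unbiased bit probabilities
    for _ in range(iters):
        # Sample a population of candidate bit strings from p.
        pop = (rng.random((pop_size, n_bits)) < p).astype(int)
        # Shift p toward the best-scoring sample.
        best = pop[np.argmax([fitness(ind) for ind in pop])]
        p = (1 - lr) * p + lr * best
    return (p > 0.5).astype(int)

# Toy objective: maximize the number of ones ("onemax").
best = pbil(fitness=np.sum, n_bits=20)
print(best.sum())  # converges at or near the all-ones string
```

In the paper's setting, `fitness` would instead score how well a candidate voxel cluster predicts the naturalistic stimulus, making the search a wrapper around the regression model.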