Medtronic's mission is to alleviate pain, restore health, and extend life through the application of biomedical engineering, explains Elaine Gee, PhD, Senior Principal Algorithm Engineer specializing in Artificial Intelligence at Medtronic. It's a mission Gee is well equipped for. With over 15 years' experience in modeling, bioinformatics, and engineering, she drives machine learning algorithm development and analytics to support next-generation medical devices for diabetes management. On behalf of AI Trends, Ben Lakin, from Cambridge Innovation Institute, sat down with Gee to discuss her most recent focus: algorithm development related to glucose sensing to improve the accuracy and performance of continuous glucose monitoring devices, also known as CGMs. Editor's Note: Gee will be giving a featured presentation on Advancing Continuous Glucose Monitoring Sensor Development with Machine Learning at Sensors Summit in San Diego, December 10-12.
Researchers of the ICAI Group (Computational Intelligence and Image Analysis) of the University of Malaga (UMA) have designed an unprecedented method capable of improving brain images obtained through magnetic resonance imaging using artificial intelligence. The new model increases image quality from low resolution to high resolution without distorting the patients' brain structures, using a deep learning artificial neural network, a model inspired by the functioning of the human brain, that "learns" this process. "Deep learning is based on very large neural networks, and so is its capacity to learn, reaching the complexity and abstraction of a brain," explains researcher Karl Thurnhofer, main author of the study, who adds that, thanks to this technique, the identification task can be performed on its own, without supervision, an effort the human eye would not be capable of. Published in the scientific journal "Neurocomputing," the study represents a scientific breakthrough, since the algorithm developed at UMA yields more accurate results in less time, with clear benefits for patients. "So far, the acquisition of quality brain images has depended on the time the patient remained immobilized in the scanner; with our method, image processing is carried out later on the computer," explains Thurnhofer.
Artificial Intelligence (AI) has the capability to provide radiologists with tools that improve their productivity and decision making, possibly leading to quicker diagnoses and improved patient outcomes. As evidenced by the great number of vendors entering the market, AI is initially being deployed as a diverse collection of assistive applications and tools. These allow radiologists to augment, quantify and stratify the information available to them, and they promise major opportunities to enhance the radiology reading process and the richness of the resulting reports. AI is also improving access to medical record information, with the goal of giving radiologists more time to think about what is going on with patients, diagnose more complex cases, collaborate with patient care teams, and perform more invasive procedures. Deep learning algorithms in particular promise to transform the foundation for decision making and workflow, as these algorithms have the ability to "learn" by example to execute a task as well as interpret new data.
A YOUNG MAN, let's call him Roger, arrives at the emergency department complaining of belly pain and nausea. A physical exam reveals that the pain is focused in the lower right portion of his abdomen. The doctor worries that it could be appendicitis. But by the time the imaging results come back, Roger is feeling better, and the scan shows that his appendix appears normal. The doctor turns to the computer to prescribe two medications, one for nausea and Tylenol for pain, before discharging him. This is one of the fictitious scenarios presented to 55 physicians around the country as part of a study to look at the usability of electronic health records (EHRs).
Predictive analytics, artificial intelligence, machine learning, personalization, consumer-centric services, enhanced security and telehealth all will affect the delivery and business of healthcare in big ways in 2020, according to five health IT experts from GetWellNetwork, a digital health company that focuses on the patient experience and patient engagement. Healthcare IT News interviewed the CEO, CSO, CISO, CTO and vice president of strategy at GetWellNetwork to get their perspectives on where health IT is headed this year. Their answers ran the gamut, and are good indicators of where healthcare provider organization CIOs and other provider IT leaders need to keep their eyes. In 2020, predictive guidance will enhance patient workflows, leading clinicians to increasingly deliver the right modality of treatment, adjust treatment recommendations as needed and triage patients to the right location throughout their care journey, whether it is the ER, urgent care or an at-home video consultation, said Robin Cavanaugh, chief technology officer at GetWellNetwork. "Additionally, predictive analytics will guide patient care by suggesting additional healthcare services that similar patients have utilized, augmenting treatment protocols with healthy living suggestions and curating information to resources that may be helpful after treatment," he added.
Cerner was interviewing Silicon Valley giants to pick a storage provider for 250 million health records, one of the largest collections of U.S. patient data. Google dispatched former chief executive Eric Schmidt to personally pitch Cerner over several phone calls and offered around $250 million in discounts and incentives, people familiar with the matter say. Google had a bigger goal in pushing for the deal than dollars and cents: a way to expand its effort to collect, analyze and aggregate health data on millions of Americans. Google representatives were vague in answering questions about how Cerner's data would be used, making the health-care company's executives wary, the people say. Eventually, Cerner struck a storage deal with Amazon.com. The failed Cerner deal reveals an emerging challenge to Google's move into health care: gaining the trust of health care partners and the public.
Identifying patterns from the neuroimaging recordings of brain activity related to the unobservable psychological or mental state of an individual can be treated as an unsupervised pattern recognition problem. The main challenges, however, for such an analysis of fMRI data are: a) defining a physiologically meaningful feature-space for representing the spatial patterns across time; b) dealing with the high-dimensionality of the data; and c) robustness to the various artifacts and confounds in the fMRI time-series. In this paper, we present a network-aware feature-space to represent the states of a general network, that enables comparing and clustering such states in a manner that is a) meaningful in terms of the network connectivity structure; b) computationally efficient; c) low-dimensional; and d) relatively robust to structured and random noise artifacts. This feature-space is obtained from a spherical relaxation of the transportation distance metric, which measures the cost of transporting "mass" over the network to transform one function into another. Through theoretical and empirical assessments, we demonstrate the accuracy and efficiency of the approximation, especially for large problems.
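As a concrete illustration of the transportation distance the abstract refers to: on a path graph with unit-length edges, the optimal transport cost between two distributions has a closed form, the L1 distance between their cumulative sums. The sketch below (a hypothetical helper, not the paper's method) computes it; on general networks this would require a min-cost-flow or LP solve, which is the expense the paper's spherical relaxation is designed to avoid.

```python
import numpy as np

def path_graph_transport_distance(p, q):
    """Exact transportation (earth mover's) distance between two
    probability distributions on a path graph with unit-length edges.
    On a path, the optimal cost of moving mass from p to q equals the
    L1 distance between the cumulative sums, so no LP solve is needed."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    assert np.isclose(p.sum(), 1.0) and np.isclose(q.sum(), 1.0)
    # Net mass that must cross each of the len(p)-1 edges:
    return np.abs(np.cumsum(p - q))[:-1].sum()

# Moving all mass two edges along the path costs 2.0:
print(path_graph_transport_distance([1, 0, 0], [0, 0, 1]))  # prints 2.0
```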
Electronic health records provide a rich source of data for machine learning methods to learn dynamic treatment responses over time. However, any direct estimation is hampered by the presence of time-dependent confounding, where actions taken are dependent on time-varying variables related to the outcome of interest. Drawing inspiration from marginal structural models, a class of methods in epidemiology which use propensity weighting to adjust for time-dependent confounders, we introduce the Recurrent Marginal Structural Network - a sequence-to-sequence architecture for forecasting a patient's expected response to a series of planned treatments. Published at the Neural Information Processing Systems Conference.
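The marginal structural model machinery the paper builds on uses inverse probability of treatment weighting (IPTW) to correct for confounding. A minimal single-time-step sketch in NumPy (synthetic data with known propensities, not the paper's recurrent sequence-to-sequence architecture) shows how the weights remove the bias that a naive treated-vs-untreated comparison suffers when a confounder drives both treatment and outcome:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)                        # confounder
p_treat = 1.0 / (1.0 + np.exp(-x))            # treatment probability depends on x
a = rng.binomial(1, p_treat)                  # observed treatment assignment
y = 2.0 * a + 1.5 * x + rng.normal(size=n)    # true treatment effect is 2.0

# Naive comparison is biased upward: treated patients have larger x on average.
naive = y[a == 1].mean() - y[a == 0].mean()

# IPTW reweights each patient by the inverse probability of the treatment
# actually received, creating a pseudo-population where x no longer drives a.
w = a / p_treat + (1 - a) / (1 - p_treat)
weighted = (np.sum(w * a * y) / np.sum(w * a)
            - np.sum(w * (1 - a) * y) / np.sum(w * (1 - a)))

print(f"naive: {naive:.2f}, IPTW: {weighted:.2f}")  # IPTW is close to 2.0
```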
Despite their impressive performance, Deep Neural Networks (DNNs) typically underperform Gradient Boosting Trees (GBTs) on many tabular-dataset learning tasks. We propose that applying a different regularization coefficient to each weight might boost the performance of DNNs by allowing them to make more use of the more relevant inputs. However, this will lead to an intractable number of hyperparameters. Here, we introduce Regularization Learning Networks (RLNs), which overcome this challenge by introducing an efficient hyperparameter tuning scheme which minimizes a new Counterfactual Loss. Our results show that RLNs significantly improve DNNs on tabular datasets, and achieve comparable results to GBTs, with the best performance achieved with an ensemble that combines GBTs and RLNs.
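The core RLN idea of a separate regularization coefficient per weight can be illustrated with a toy linear model. The sketch below is plain NumPy with hand-chosen fixed coefficients, not the paper's Counterfactual-Loss tuning scheme: relaxing the penalty on the one relevant input lets its weight be recovered, while a single shared coefficient over-shrinks it.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 5
X = rng.normal(size=(n, d))
true_w = np.array([3.0, 0.0, 0.0, 0.0, 0.0])   # only feature 0 is relevant
y = X @ true_w + 0.1 * rng.normal(size=n)

def fit(lam):
    """Gradient descent on squared error with a per-weight L2 penalty lam."""
    w = np.zeros(d)
    for _ in range(2000):
        grad = X.T @ (X @ w - y) / n + lam * w  # per-weight penalty gradient
        w -= 0.1 * grad
    return w

uniform = fit(np.full(d, 5.0))                         # one shared coefficient
per_weight = fit(np.array([0.0, 5.0, 5.0, 5.0, 5.0]))  # relaxed on feature 0

print(f"shared lambda:     w[0] = {uniform[0]:.2f}")    # over-shrunk
print(f"per-weight lambda: w[0] = {per_weight[0]:.2f}")  # close to 3.0
```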