Unique AI method for generating proteins to speed up drug development

#artificialintelligence

"What we are now able to demonstrate offers fantastic potential for a number of future applications, such as faster and more cost-efficient development of protein-based drugs," says Aleksej Zelezniak, Associate Professor at the Department of Biology and Biological Engineering at Chalmers. Proteins are large, complex molecules that play a crucial role in all living cells, building, modifying, and breaking down other molecules naturally inside our cells. They are also widely used in industrial processes and products, and in our daily lives. Protein-based drugs are very common--the diabetes drug insulin is one of the most prescribed. Some of the most expensive and effective cancer medicines are also protein-based, as well as the antibody formulas currently being used to treat COVID-19.


Social determinants of health in the era of artificial intelligence with electronic health records: A systematic review

arXiv.org Artificial Intelligence

There is growing evidence of the significant role of social determinants of health (SDOH) in a wide variety of health outcomes. In the era of artificial intelligence (AI), electronic health records (EHRs) have been widely used to conduct observational studies. However, how to make the best use of SDOH information from EHRs is yet to be studied. In this paper, we systematically reviewed recently published papers and provide a methodology review of AI methods using SDOH information in EHR data. A total of 1250 articles were retrieved from the literature between 2010 and 2020, and 74 papers were included in this review after abstract and full-text screening. We summarize these papers in terms of general characteristics (including publication years, venues, and countries), SDOH types, disease areas, study outcomes, AI methods to extract SDOH from EHRs, and AI methods using SDOH for healthcare outcomes. Finally, we conclude with a discussion of current trends, challenges, and future directions in using SDOH from EHRs.


Machine Learning Towards Intelligent Systems: Applications, Challenges, and Opportunities

arXiv.org Artificial Intelligence

The emergence of the Internet and related technologies, and our continued reliance on them, has resulted in the generation of large amounts of data that can be made available for analysis. However, humans do not possess the cognitive capabilities to understand such large amounts of data. Machine learning (ML) provides a mechanism for humans to process large amounts of data, gain insights about the behavior of the data, and make more informed decisions based on the resulting analysis. ML has applications in various fields. This review focuses on some of these fields and applications, including education, healthcare, network security, banking and finance, and social media. Within these fields there are multiple unique challenges; ML, however, can provide solutions to these challenges, as well as create further research opportunities. Accordingly, this work surveys some of the challenges facing the aforementioned fields and presents some of the previous works that have tackled them. Moreover, it suggests several research opportunities that would benefit from the use of ML to address these challenges.


Probabilistic Machine Learning for Healthcare

arXiv.org Machine Learning

Machine learning can be used to make sense of healthcare data. Probabilistic machine learning models help provide a complete picture of observed data in healthcare. In this review, we examine how probabilistic machine learning can advance healthcare. We consider challenges in the predictive model-building pipeline where probabilistic models can be beneficial, including calibration and missing data. Beyond predictive models, we also investigate the utility of probabilistic machine learning in phenotyping, in generative models for clinical use cases, and in reinforcement learning.
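
Calibration, one of the challenges this review highlights, is easy to make concrete. Below is a minimal sketch (not from the paper) of the expected calibration error, a standard check of whether a risk model's predicted probabilities match observed outcome frequencies; the function name, bin count, and toy data are all illustrative assumptions.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Empirical expected calibration error (ECE) for a binary predictor.

    probs:  predicted probabilities of the positive class, shape (N,)
    labels: true binary labels in {0, 1}, shape (N,)
    """
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (probs > lo) & (probs <= hi)
        if not mask.any():
            continue
        confidence = probs[mask].mean()  # average predicted probability in bin
        accuracy = labels[mask].mean()   # observed positive rate in bin
        ece += mask.mean() * abs(accuracy - confidence)
    return ece

# Toy usage: a model that is systematically overconfident.
rng = np.random.default_rng(0)
p = rng.uniform(0.6, 1.0, size=1000)   # confident predictions
y = rng.binomial(1, p * 0.8)           # outcomes occur less often than predicted
print(f"ECE = {expected_calibration_error(p, y):.3f}")
```

A well-calibrated model drives the ECE toward zero; a probabilistic model that reports honest predictive distributions tends to score better on this check than one that only outputs point estimates.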


Epidemiologically and Socio-economically Optimal Policies via Bayesian Optimization

arXiv.org Machine Learning

Mass public quarantining, colloquially known as a lock-down, is a non-pharmaceutical intervention to check the spread of disease. This paper presents ESOP (Epidemiologically and Socio-economically Optimal Policies), a novel application of active machine learning via Bayesian optimization that interacts with an epidemiological model to arrive at lock-down schedules that optimally balance the public health benefits and the socio-economic downsides of reduced economic activity during lock-down periods. The utility of ESOP is demonstrated using case studies with VIPER (Virus-Individual-Policy-EnviRonment), a stochastic agent-based simulator that this paper also proposes. However, ESOP is flexible enough to interact with arbitrary epidemiological simulators in a black-box manner, and to produce schedules that involve multiple phases of lock-downs.
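
The black-box loop the abstract describes can be sketched generically. The code below is a minimal Bayesian-optimization loop over a single lock-down phase, with a made-up cost function standing in for VIPER; the schedule parametrization, the cost model, and all names are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def simulate_cost(start_day, duration):
    """Stand-in black-box simulator (NOT VIPER): a combined
    epidemiological + economic cost for one lock-down phase."""
    infections = 100.0 * np.exp(-0.05 * duration) + 0.5 * start_day
    economic_loss = 2.0 * duration
    return infections + economic_loss

# Candidate schedules: (start day, duration in days).
grid = np.array([(s, d) for s in range(0, 60, 5) for d in range(7, 63, 7)], dtype=float)

# Initialise with a few random evaluations of the "expensive" simulator.
rng = np.random.default_rng(0)
idx = rng.choice(len(grid), size=5, replace=False)
X, y = grid[idx], np.array([simulate_cost(*g) for g in grid[idx]])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True, alpha=1e-6)
for _ in range(20):                              # active-learning loop
    gp.fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
    x_next = grid[np.argmax(ei)]                 # query the simulator here next
    X = np.vstack([X, x_next])
    y = np.append(y, simulate_cost(*x_next))

print("best schedule (start, duration):", X[np.argmin(y)], "cost:", y.min())
```

Because the surrogate only ever sees (schedule, cost) pairs, any simulator with that interface can be plugged in, which is the black-box flexibility the paper claims for ESOP.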


Patient Similarity Analysis with Longitudinal Health Data

arXiv.org Machine Learning

Healthcare professionals have long envisioned using the enormous processing powers of computers to discover new facts and medical knowledge locked inside electronic health records. These vast medical archives contain time-resolved information about medical visits, tests and procedures, as well as outcomes, which together form individual patient journeys. By assessing the similarities among these journeys, it is possible to uncover clusters of common disease trajectories with shared health outcomes. The assignment of patient journeys to specific clusters may in turn serve as the basis for personalized outcome prediction and treatment selection. This procedure is a non-trivial computational problem, as it requires the comparison of patient data with multi-dimensional and multi-modal features that are captured at different times and resolutions. In this review, we provide a comprehensive overview of the tools and methods that are used in patient similarity analysis with longitudinal data and discuss its potential for improving clinical decision making.
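
One common recipe covered by reviews in this area is to compute a pairwise distance that tolerates journeys of different lengths, then cluster on the resulting matrix. The sketch below uses dynamic time warping plus hierarchical clustering on toy single-feature journeys; real patient data is multi-dimensional and multi-modal, so treat every name and value here as an illustrative assumption.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D lab-value series,
    tolerating different lengths and visit counts."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Toy "patient journeys": one lab value sampled at irregular visit counts.
journeys = [
    np.array([5.0, 5.5, 6.1, 7.0]),        # steadily rising
    np.array([5.1, 6.0, 7.2]),             # rising, fewer visits
    np.array([5.0, 4.9, 5.1, 5.0, 4.8]),   # stable
]

n = len(journeys)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = dtw_distance(journeys[i], journeys[j])

# Condense the symmetric matrix and cut the dendrogram into two clusters.
clusters = fcluster(linkage(squareform(dist), method="average"),
                    t=2, criterion="maxclust")
print("cluster assignment per patient:", clusters)
```

The two rising journeys land in one cluster despite their different lengths, which is exactly the property that makes warping-based distances attractive for irregularly sampled records.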


Deep Bayesian Gaussian Processes for Uncertainty Estimation in Electronic Health Records

arXiv.org Machine Learning

One major impediment to the wider use of deep learning for clinical decision making is the difficulty of assigning a level of confidence to model predictions. Currently, deep Bayesian neural networks and sparse Gaussian processes are the two main scalable uncertainty estimation methods. However, deep Bayesian neural networks suffer from a lack of expressiveness, and more expressive models such as deep kernel learning, an extension of the sparse Gaussian process, capture only the uncertainty from the higher-level latent space. The underlying deep learning model therefore lacks interpretability and ignores uncertainty from the raw data. In this paper, we merge features of the deep Bayesian learning framework with deep kernel learning to leverage the strengths of both methods for more comprehensive uncertainty estimation. Through a series of experiments on predicting the first incidence of heart failure, diabetes and depression applied to large-scale electronic medical records, we demonstrate that our method is better at capturing uncertainty than both Gaussian processes and deep Bayesian neural networks in terms of indicating data insufficiency and distinguishing true positive and false positive predictions, with comparable generalisation performance. Furthermore, by assessing the accuracy and area under the receiver operating characteristic curve over the predictive probability, we show that our method is less susceptible to making overconfident predictions, especially for the minority class in imbalanced datasets. Finally, we demonstrate how uncertainty information derived by the model can inform risk factor analysis towards model interpretability.
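
The paper's hybrid of deep kernel learning and Bayesian features is too involved to reproduce here, but the underlying idea of attaching a spread to each prediction can be illustrated with a much simpler, plainly substituted technique: Monte Carlo dropout. In the sketch below, the architecture, layer sizes, and data are all made up, and the model is untrained; it is a sketch of the idea, not the authors' method.

```python
import torch
import torch.nn as nn

class RiskModel(nn.Module):
    """Small classifier over tabular EHR-style features with dropout layers."""
    def __init__(self, n_features):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(64, 64), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(64, 1),
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))

def mc_dropout_predict(model, x, n_samples=100):
    """Keep dropout active at test time and average repeated forward passes;
    the spread across samples is a rough epistemic-uncertainty signal."""
    model.train()                      # enables dropout during inference
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)

model = RiskModel(n_features=20)       # untrained, for illustration only
x = torch.randn(5, 20)                 # five hypothetical patients
mean, std = mc_dropout_predict(model, x)
for m, s in zip(mean.squeeze(1), std.squeeze(1)):
    print(f"predicted risk {m:.2f} +/- {s:.2f}")
```

A clinical workflow can then route high-std predictions to human review, which is the practical payoff the abstract's "indicating data insufficiency" claim points at.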


6 expert essays on the future of biotech

#artificialintelligence

What exactly is biotechnology, and how could it change our approach to human health? As the age of big data transforms the potential of this emerging field, members of the World Economic Forum's Global Future Council on Biotechnology tell you everything you need to know. What if your doctor could predict your heart attack before you had it, and prevent it? Or what if we could cure a child's cancer by exploiting the bacteria in their gut? These types of biotechnology solutions aimed at improving human health are already being explored. As more and more data (so-called "big data") becomes available across disparate domains such as electronic health records, genomics, metabolomics, and even lifestyle information, further insights and opportunities for biotechnology will become apparent. However, to achieve the maximal potential, both technical and ethical issues will need to be addressed. As we look to the future, let's first revisit previous examples of where combining data with scientific understanding has led to new health solutions. Biotechnology is a rapidly changing field that continues to transform both in scope and impact. Karl Ereky first coined the term biotechnology in 1919.


Being Optimistic to Be Conservative: Quickly Learning a CVaR Policy

arXiv.org Artificial Intelligence

While maximizing expected return is the goal in most reinforcement learning approaches, risk-sensitive objectives such as conditional value at risk (CVaR) are more suitable for many high-stakes applications. However, relatively little is known about how to explore to quickly learn policies with good CVaR. In this paper, we present the first algorithm for sample-efficient learning of CVaR-optimal policies in Markov decision processes based on the optimism in the face of uncertainty principle. This method relies on a novel optimistic version of the distributional Bellman operator that moves probability mass from the lower to the upper tail of the return distribution. We prove asymptotic convergence and optimism of this operator for the tabular policy evaluation case. We further demonstrate that our algorithm finds CVaR-optimal policies substantially faster than existing baselines in several simulated environments with discrete and continuous state spaces. A key goal in reinforcement learning (RL) is to quickly learn to make good decisions by interacting with an environment. In most cases the quality of the decision policy is evaluated with respect to its expected (discounted) sum of rewards. However, in many interesting cases, it is important to consider the full distribution over the potential sum of rewards, and the desired objective may be a risk-sensitive measure of this distribution. For example, a patient undergoing surgery for a knee replacement will (hopefully) only experience that procedure once or twice, and will likely be more interested in the distribution of potential results for a single procedure than in what would happen on average if he or she were to undergo it hundreds of times. Finance and (machine) control are other settings where interest in risk-sensitive outcomes is common. A popular risk-sensitive measure of a distribution of outcomes is the Conditional Value at Risk (CVaR) (Artzner et al. 1999). Intuitively, CVaR is the expected reward in the worst α-fraction of outcomes, and has seen extensive use in financial portfolio optimization (Zhu and Fukushima 2009), often under the name "expected shortfall".
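
The informal definition above, the expected reward in the worst α-fraction of outcomes, has a direct empirical form. The snippet below (a generic sketch, not the paper's algorithm) computes it for two toy "policies" chosen so that equal means hide very different tails; the distributions and all names are assumptions for illustration.

```python
import numpy as np

def empirical_cvar(returns, alpha=0.1):
    """Mean of the worst alpha-fraction of sampled returns.

    For alpha = 1 this recovers the ordinary expected return; smaller
    alpha focuses on the left tail that risk-sensitive RL cares about.
    """
    returns = np.sort(np.asarray(returns, dtype=float))
    k = max(1, int(np.ceil(alpha * len(returns))))
    return returns[:k].mean()

# Two policies with (almost) the same mean return but different tails.
rng = np.random.default_rng(0)
safe = rng.normal(10.0, 1.0, size=10_000)
risky = np.where(rng.random(10_000) < 0.05, -50.0, 13.2)  # rare disasters

for name, r in [("safe", safe), ("risky", risky)]:
    print(f"{name}: mean={r.mean():.1f}  CVaR_0.1={empirical_cvar(r, 0.1):.1f}")
```

Both policies average roughly 10, yet their CVaR values diverge sharply, which is why a patient facing a one-shot procedure would prefer the CVaR-optimal choice over the expectation-optimal one.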


Reinforcement Learning in Healthcare: A Survey

arXiv.org Artificial Intelligence

As a subfield of machine learning, reinforcement learning (RL) aims at improving an agent's decision-making capabilities through interaction experience with the world and evaluative feedback. Unlike traditional supervised learning methods, which usually rely on one-shot, exhaustive and supervised reward signals, RL tackles sequential decision-making problems with sampled, evaluative and delayed feedback. These distinctive features make RL a suitable candidate for developing powerful solutions in a variety of healthcare domains, where diagnostic decisions or treatment regimes are usually characterized by a prolonged and sequential procedure. This survey discusses the broad applications of RL techniques in healthcare domains, in order to provide the research community with a systematic understanding of the theoretical foundations, enabling methods and techniques, existing challenges, and new insights of this emerging paradigm. After briefly examining theoretical foundations and key techniques in RL research from the efficiency and representation directions, we provide an overview of RL applications in a variety of healthcare domains, ranging from dynamic treatment regimes in chronic disease and critical care, to automated medical diagnosis from both unstructured and structured clinical data, to many other control or scheduling problems that permeate a healthcare system. Finally, we summarize the challenges and open issues in current research, and point out potential solutions and directions for future research.
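
The sequential, delayed-feedback structure the survey emphasizes is what tabular RL methods like Q-learning are built for. The sketch below runs Q-learning on a made-up three-state treatment MDP; it is not a method from the survey, and the states, actions, and transition model are all illustrative assumptions.

```python
import numpy as np

# Toy treatment MDP (illustrative only): states are coarse patient
# conditions, actions are treatment choices, and feedback is delayed in
# the sense that reward depends on the state the patient moves into.
STATES, ACTIONS = 3, 2   # states: 0=stable, 1=deteriorating, 2=critical
rng = np.random.default_rng(0)

def step(s, a):
    """Hypothetical transition/reward model standing in for a patient simulator."""
    improve = 0.7 if a == 1 else 0.4       # aggressive treatment helps more...
    side_effect = -1.0 if a == 1 else 0.0  # ...but carries a cost
    s_next = max(s - 1, 0) if rng.random() < improve else min(s + 1, 2)
    reward = side_effect + (5.0 if s_next == 0 else -5.0 if s_next == 2 else 0.0)
    return s_next, reward

Q = np.zeros((STATES, ACTIONS))
gamma, lr, eps = 0.95, 0.1, 0.1
for episode in range(5000):
    s = int(rng.integers(STATES))
    for _ in range(20):                    # one finite episode of visits
        a = int(rng.integers(ACTIONS)) if rng.random() < eps else int(Q[s].argmax())
        s_next, r = step(s, a)
        # Temporal-difference update: credit delayed outcomes back to (s, a).
        Q[s, a] += lr * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print("greedy action per state:", Q.argmax(axis=1))  # learned treatment policy
```

Even this toy shows the survey's core point: the learned policy emerges from sampled, delayed rewards rather than from per-example labels, which is why RL fits prolonged treatment regimes where no supervised ground truth exists at each step.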