
KNNImputer

#artificialintelligence

The idea in kNN methods is to identify 'k' samples in the dataset that are similar or close in the feature space. We then use these 'k' samples to estimate the value of the missing data points: each sample's missing values are imputed using the mean value of the 'k' neighbours found in the dataset. Let's look at an example to understand this. Consider a set of observations in a two-dimensional space: (2,0), (2,2), (3,3).
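The imputation step described above can be sketched with scikit-learn's `KNNImputer`. The points below are the ones from the text, extended with a fourth, hypothetical observation whose first feature is missing:

```python
import numpy as np
from sklearn.impute import KNNImputer

# The three points from the example, plus one row with a missing value.
X = np.array([
    [2.0, 0.0],
    [2.0, 2.0],
    [3.0, 3.0],
    [np.nan, 4.0],  # first feature missing; to be imputed
])

# Distances are computed on the non-missing coordinates
# (the default 'nan_euclidean' metric); the missing entry is then
# replaced by the mean of the k=2 nearest neighbours.
imputer = KNNImputer(n_neighbors=2)
X_imputed = imputer.fit_transform(X)
print(X_imputed)
```

Here the two nearest neighbours of the incomplete row (measured on the second feature) are (2,2) and (3,3), so the missing entry becomes their mean, 2.5.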


How Do We Study Artificial Intelligence in Healthcare Effectively?

#artificialintelligence

In a recent article in JAMA, Derek C. Angus writes about the Hypotension Prediction During Surgery (HYPE) trial, one of the first randomized, controlled trials of an artificial intelligence (AI) intervention. Angus discusses how this type of study provides good evidence for actually using a particular, specific AI intervention, but he highlights the limitations of this type of research, too. Although Angus focuses on an AI intervention for hypotension, the principles he explores are relevant to mental health as well. Mental health apps using AI technology are already being used, despite the current lack of evidence for improved outcomes. Ethical concerns have also been raised about the use of AI in all medical contexts.


Samsung promotes AI expert to lead research arm

ZDNet

Dr Sebastian Seung will head Samsung Research. Samsung Electronics has promoted one of its key artificial intelligence (AI) experts to head its research unit. Dr Sebastian Seung will be the new head of Samsung Research, the company said on Wednesday. He will oversee research conducted at Samsung's 15 global research and development centres and seven AI centres, which are spread across 13 countries, Samsung said. Seung, the Evnin Professor in the Neuroscience Institute and Department of Computer Science at Princeton University, joined Samsung in 2018 and has been Samsung Research's chief research scientist since then.


COVID-19 Hangover -- Part II

#artificialintelligence

In part I of this blog I wrote about the most acute problem our society faces today, the climate crisis, and how we can treat the pandemic-caused lockdown as a "what-if" scenario, analysing its consequences as data points to make better decisions in the future. While the climate crisis must be addressed promptly and aggressively, COVID-19 confronted governments with an immediate health crisis, with projections that 80% of the population could be infected in the short term. How will health systems manage the prevention, diagnosis and treatment of the pandemic while continuing to provide ongoing services and treatments? In this part, I will present a few developments in telemedicine, personalized medicine and drug development powered by AI/ML, and how they have better equipped us for this fight and could be used routinely in the future. Telemedicine is a buzzword we used to hear in the context of highly populated countries, where a shortage of trained personnel drives attempts to bridge supply and demand with remote resourcing.


G-Net: A Deep Learning Approach to G-computation for Counterfactual Outcome Prediction Under Dynamic Treatment Regimes

arXiv.org Machine Learning

Counterfactual prediction is a fundamental task in decision-making. G-computation is a method for estimating expected counterfactual outcomes under dynamic time-varying treatment strategies. Existing G-computation implementations have mostly employed classical regression models with limited capacity to capture complex temporal and nonlinear dependence structures. This paper introduces G-Net, a novel sequential deep learning framework for G-computation that can handle complex time series data while imposing minimal modeling assumptions, and provides estimates of individual- or population-level time-varying treatment effects. We evaluate alternative G-Net implementations using realistically complex temporal simulated data obtained from CVSim, a mechanistic model of the cardiovascular system.


A new AI 'Super Nurse' monitors patients in Israeli hospital

#artificialintelligence

Imagine a nurse able to monitor multiple patients in separate rooms simultaneously; stay on top of their blood pressure, pulse and vital signs; and spot signs of deterioration even before the patients feel them. This medical superhero is not human, but rather a product of artificial intelligence, advanced software algorithms, sensors and cameras. And it's being assembled right now at Tel Aviv Sourasky Medical Center. The creation of an AI-powered "super nurse" is the result of a decade of steady work by Ahuva Weiss-Meilik and her team in the hospital's I-Medata center. "Our doctors and nurses can't be everywhere," Weiss-Meilik tells ISRAEL21c.


Machine learning and clinical insights: building the best model

#artificialintelligence

At HIMSS20 next month, two machine learning experts will show how machine learning algorithms are evolving to handle complex physiological data and drive more detailed clinical insights. During surgery and other critical care procedures, continuous monitoring of blood pressure to detect and avoid the onset of arterial hypotension is crucial. New machine learning technology developed by Edwards Lifesciences has proven to be an effective means of doing this. In the prodromal stage of hemodynamic instability, which is characterized by subtle, complex changes in different physiologic variables, unique dynamic arterial waveform "signatures" are formed; detecting them requires machine learning and complex feature-extraction techniques. Feras Hatib, director of research and development for algorithms and signal processing at Edwards Lifesciences, explained that his team developed a technology that can predict upcoming hypotension in acute-care patients, in real time and continuously, using arterial pressure waveforms.


Interpretable Off-Policy Evaluation in Reinforcement Learning by Highlighting Influential Transitions

arXiv.org Machine Learning

Off-policy evaluation (OPE) in reinforcement learning offers the chance of using observational data to improve future outcomes in domains such as healthcare and education, but safe deployment in high-stakes settings requires ways of assessing its validity. Traditional measures such as confidence intervals may be insufficient due to noise, limited data and confounding. In this paper we develop a method that could serve as a hybrid human-AI system, to enable human experts to analyze the validity of policy evaluation estimates. This is accomplished by highlighting observations in the data whose removal will have a large effect on the OPE estimate, and formulating a set of rules for choosing which ones to present to domain experts for validation. We develop methods to compute exactly the influence functions for fitted Q-evaluation with two different function classes: kernel-based and linear least squares. Experiments on medical simulations and real-world intensive care unit data demonstrate that our method can be used to identify limitations in the evaluation process and make evaluation more robust.


A Preliminary Approach for Learning Relational Policies for the Management of Critically Ill Children

arXiv.org Artificial Intelligence

The increased use of electronic health records has made possible the automated extraction of medical policies from patient records to aid in the development of clinical decision support systems. We adapted a boosted Statistical Relational Learning (SRL) framework to learn probabilistic rules from clinical hospital records for the management of physiologic parameters of children with severe cardiac or respiratory failure who were managed with extracorporeal membrane oxygenation. In this preliminary study, the results were promising. In particular, the algorithm returned logic rules for medical actions that are consistent with medical reasoning.


r/MachineLearning - [D] Decision Tree Splitting strategy

#artificialintelligence

I have a dataset with 4 categorical features (cholesterol, systolic blood pressure, diastolic blood pressure, and smoking rate). I use a decision tree classifier to find the probability of stroke. I am trying to verify my understanding of the splitting procedure done by Python's sklearn. Since it is a binary tree, there are three possible ways to split a three-category feature: group the categories as {0, 1 | 2}, {0, 2 | 1}, or {0 | 1, 2}. What I know (please correct me here) is that the chosen split is the one with the highest information gain.
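The candidate groupings can be compared directly by impurity decrease. A minimal sketch with made-up data (the feature values, labels, and the `split_gain` helper are all hypothetical), using Gini impurity, which is sklearn's default criterion; information gain with entropy works the same way:

```python
import numpy as np

def gini(y):
    """Gini impurity of a label array."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def split_gain(x, y, left_cats):
    """Impurity decrease when categories in `left_cats` go left, the rest right."""
    mask = np.isin(x, list(left_cats))
    n, n_left, n_right = len(y), mask.sum(), (~mask).sum()
    if n_left == 0 or n_right == 0:
        return 0.0
    return (gini(y)
            - (n_left / n) * gini(y[mask])
            - (n_right / n) * gini(y[~mask]))

# Hypothetical data: one 3-level categorical feature and a binary stroke label.
x = np.array([0, 0, 1, 1, 2, 2, 2, 0])
y = np.array([0, 0, 0, 1, 1, 1, 1, 0])

# Score each of the three binary groupings of the categories.
for left in [{0, 1}, {0, 2}, {0}]:
    print(sorted(left), round(split_gain(x, y, left), 4))
```

One caveat worth knowing: sklearn's `DecisionTreeClassifier` treats every feature as numeric and only considers threshold splits, so with the ordinal encoding 0 < 1 < 2 only the groupings {0 | 1, 2} and {0, 1 | 2} are actually reachable; a split like {0, 2 | 1} would require a different encoding, such as one-hot.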