IBM Watson aligns with 16 health systems and imaging firms to apply cognitive computing to battle cancer, diabetes, heart disease

#artificialintelligence

IBM Watson Health has formed a medical imaging collaborative with more than 15 leading healthcare organizations. The goal: to take on some of the deadliest diseases. The collaborative, which includes health systems, academic medical centers, ambulatory radiology providers and imaging technology companies, aims to help doctors address breast, lung, and other cancers; diabetes; eye health; brain disease; and heart disease and related conditions, such as stroke. Watson will mine insights from what IBM calls previously invisible unstructured imaging data and combine them with a broad variety of data from other sources: electronic health records, radiology and pathology reports, lab results, doctors' progress notes, medical journals, clinical care guidelines and published outcomes studies. As the collaborative's work evolves, Watson's rationale and insights will evolve with it, informed by the latest combined thinking of the participating organizations.


Solving the Empirical Bayes Normal Means Problem with Correlated Noise

arXiv.org Machine Learning

The Normal Means problem plays a fundamental role in many areas of modern high-dimensional statistics, in both theory and practice, and the Empirical Bayes (EB) approach to solving it has proven highly effective, again in both theory and practice. However, almost all EB treatments of the Normal Means problem assume that the observations are independent. In real-world applications, correlations are ubiquitous, and they can grossly distort EB estimates. Here, exploiting theory from Schwartzman (2010), we develop new EB methods for the Normal Means problem that account for unknown correlations among observations. We provide practical software implementations of these methods and illustrate them in the context of large-scale multiple testing and False Discovery Rate (FDR) control. In realistic numerical experiments, our methods compare favorably with other commonly used multiple testing methods.
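For readers unfamiliar with the setup, the sketch below illustrates the classical EB pipeline in the simplest, independent-noise case that the abstract contrasts against: observations x_j = theta_j + e_j with e_j ~ N(0, s^2), a normal prior on the means whose variance is fitted by marginal maximum likelihood, and posterior-mean shrinkage. The prior family and all names here are illustrative assumptions, not the paper's method, which is precisely about handling correlated e_j.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative EB normal-means sketch with INDEPENDENT noise (the baseline
# the paper generalizes); the N(0, A) prior is an assumption for clarity.
rng = np.random.default_rng(0)
n, s = 1000, 1.0
theta = np.where(rng.random(n) < 0.8, 0.0, rng.normal(0.0, 3.0, n))  # sparse true means
x = theta + rng.normal(0.0, s, n)                                    # x_j = theta_j + e_j

# Empirical step: with theta_j ~ N(0, A), marginally x_j ~ N(0, A + s^2);
# choose A to maximize the marginal log-likelihood of the observed x.
def neg_marginal_loglik(A):
    var = A + s ** 2
    return 0.5 * np.sum(np.log(2.0 * np.pi * var) + x ** 2 / var)

A_hat = minimize_scalar(neg_marginal_loglik, bounds=(1e-6, 100.0),
                        method="bounded").x

# Bayes step: the posterior mean under the fitted prior shrinks x toward zero.
theta_hat = (A_hat / (A_hat + s ** 2)) * x

print(f"fitted prior variance A: {A_hat:.3f}")
print(f"MSE of raw x:       {np.mean((x - theta) ** 2):.3f}")
print(f"MSE of EB estimate: {np.mean((theta_hat - theta) ** 2):.3f}")
```

On typical runs the EB estimate has markedly lower mean squared error than the raw observations; correlated noise breaks the marginal-likelihood step above, which is the failure mode the paper addresses.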


Defining Explanation in Probabilistic Systems

arXiv.org Artificial Intelligence

As probabilistic systems gain popularity and come into wider use, the need for a mechanism that explains the system's findings and recommendations becomes more critical. The system will also need a mechanism for ordering competing explanations. We examine two representative approaches to explanation in the literature - one due to Gärdenfors and one due to Pearl - and show that both suffer from significant problems. We propose an approach to defining a notion of "better explanation" that combines some of the features of both with more recent work by Pearl and others on causality.


Deep Learning: Not Just for Silicon Valley · fast.ai

#artificialintelligence

Recent American news events range from horrifying to dystopian, but reading the applications for our fast.ai Deep Learning Part 2 fellowship, I was blown away by how many bright, creative, resourceful folks from all over the world are applying deep learning to tackle meaningful and interesting problems. Their projects range from ending illegal logging and diagnosing malaria in rural Uganda to translating Japanese manga, reducing farmer suicides in India via better loans, making Nigerian fashion recommendations, monitoring patients with Parkinson's disease, and more. Our mission at fast.ai is to make deep learning accessible to people from varied backgrounds outside of elite institutions who are tackling problems in meaningful but low-resource areas, far from mainstream deep learning research. Our group of selected fellows for Deep Learning Part 2 includes people from Nigeria, Ivory Coast, South Africa, Pakistan, Bangladesh, India, Singapore, Israel, Canada, Spain, Germany, France, Poland, Russia, and Turkey.


Causal Inference through a Witness Protection Program

arXiv.org Machine Learning

One of the most fundamental problems in causal inference is the estimation of a causal effect when variables are confounded. This is difficult in an observational study because one has no direct evidence that all confounders have been adjusted for. We introduce a novel approach to estimating causal effects that exploits observational conditional independencies to suggest "weak" paths in an unknown causal graph. The widely used faithfulness condition of Spirtes et al. is relaxed to allow varying degrees of "path cancellation" that imply conditional independencies but do not rule out the existence of confounding causal paths. The output is a posterior distribution over bounds on the average causal effect, obtained via a linear programming approach combined with Bayesian inference. We argue that this approach should be used in regular practice alongside other default tools in observational studies.
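As a reminder of the estimand involved (the definition below is the standard one, not taken from the paper itself), the average causal effect of a binary treatment X on an outcome Y is written with Pearl's do-operator; under unobserved confounding it is generally not point-identified, which is why the method reports bounds rather than a point estimate:

```latex
\[
\mathrm{ACE}(X \to Y) \;=\; \mathbb{E}\bigl[Y \mid do(X=1)\bigr] \;-\; \mathbb{E}\bigl[Y \mid do(X=0)\bigr]
\]
% With unmeasured confounding, the data constrain the ACE only to an
% interval [L, U]; the paper's linear program searches over distributions
% consistent with the observed independencies (under relaxed faithfulness)
% to obtain such bounds, and Bayesian inference yields a posterior over them.
```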