Drowning in Data

#artificialintelligence

In 1945 the volume of human knowledge doubled every 25 years; by some estimates it now doubles every 12 hours [1]. With our collective computational power rapidly increasing, vast amounts of data, and our growing ability to assimilate them, the ground for innovation has never been more fertile. Healthtech companies are sprouting from this data-rich soil at exponential rates. Cell-free DNA companies, once a rarity, are becoming ubiquitous. The genomics landscape, once dominated by a few players, is being inundated by a slew of competitors. Grandiose claims of diagnosing 50 different cancers from a single blood sample, or of using AI to best dermatologists, radiologists, pathologists, and others, are being made at an alarming rate. Accordingly, it's imperative to know how to assess these claims as fact or fiction, particularly when claimants may employ "statistical misdirection". In this installment of "The Insider's Guide to Translational Medicine" we disarm perpetrators of statistical warfare of their greatest ...


Smarter health: How AI is transforming health care

#artificialintelligence

This is the first episode in our series Smarter health. American health care is complex. In this episode, we explore the potential of AI in health care -- from predicting patient risk, to diagnostics, to helping physicians make better decisions. Today, On Point: We consider whether AI's potential can be realized in our financially motivated health care system. Welcome to an On Point special series, Smarter health: Artificial intelligence and the future of American health care. In the not-so-distant future, artificial intelligence and machine learning technologies could transform the health care you receive, whether you're aware of it or not. Here are just a couple of examples. Dr. Vindell Washington is chief clinical officer at Verily Life Sciences, which is owned by Google's parent company, Alphabet. Washington oversees the development of Onduo, a technology that weaves together multiple streams of complex, daily medical data to guide and personalize health care decisions across entire patient populations. VINDELL WASHINGTON [Tape]: You might have a blood pressure cuff reading, you may have a blood sugar reading, you may have some logging that you've done.


A Sensor Sniffs for Cancer, Using Artificial Intelligence

#artificialintelligence

Researchers at Memorial Sloan Kettering Cancer Center (MSK) have developed a sensor that can be trained to sniff for cancer, with the help of artificial intelligence. Although the training doesn't work the same way one trains a police dog to sniff for explosives or drugs, the sensor has some similarity to how the nose works. The nose can detect more than a trillion different scents, even though it has just a few hundred types of olfactory receptors. The pattern of which odor molecules bind to which receptors creates a kind of molecular signature that the brain uses to recognize a scent. Like the nose, the cancer detection technology uses an array of multiple sensors to detect a molecular signature of the disease.
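The pattern-matching idea described above can be sketched in a few lines. This is purely a toy illustration, not MSK's method: the sensor count, response values, and class labels below are all invented. A sample's "molecular signature" is modeled as the vector of responses across the sensor array, and recognition reduces to comparing that vector against reference signatures.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two sensor-response vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical response patterns of a 6-sensor array (invented numbers):
# each class has a characteristic signature across the sensors.
reference = {
    "cancer":  np.array([0.9, 0.1, 0.7, 0.2, 0.8, 0.1]),
    "healthy": np.array([0.2, 0.8, 0.1, 0.9, 0.1, 0.7]),
}

# A new sample's signature, closest in pattern to the "cancer" reference.
new_sample = np.array([0.85, 0.15, 0.65, 0.25, 0.75, 0.2])

# Classify by highest similarity to a reference signature.
label = max(reference, key=lambda k: cosine(reference[k], new_sample))
```

In practice a trained model (the "artificial intelligence" in the article) replaces this nearest-signature rule, but the core idea is the same: the array's joint response pattern, not any single sensor, carries the signal.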


How A.I. Is Finding New Cures in Old Drugs

#artificialintelligence

In the elegant quiet of the café at the Church of Sweden, a narrow Gothic-style building in Midtown Manhattan, Daniel Cohen is taking a break from explaining genetics. He moves toward the creaky piano positioned near the front door, sits down, and plays a flowing, flawless rendition of "Over the Rainbow." If human biology is the scientific equivalent of a complicated score, Cohen has learned how to navigate it like a virtuoso. Cohen was the driving force behind Généthon, the French laboratory that in December 1993 produced the first-ever "map" of the human genome. He essentially introduced Big Data and automation to the study of genomics, as he and his team demonstrated for the first time that it was possible to use super-fast computing to speed up the processing of DNA samples.


AI In Healthcare Highlights & Milestones 2021

#artificialintelligence

In 2021 the application of AI enabled advances in many areas of healthcare. We made significant progress in AI for drug discovery, medical imaging, diagnostics, pathology, and clinical trials. Important peer-reviewed papers were published and dozens of partnerships were formed. Big Pharma companies and major tech companies became very active in the space. Record amounts of funding were raised, and a few companies even started human clinical trials. Microsoft and NVIDIA launched two of the world's most powerful supercomputers, and Microsoft announced Azure OpenAI Service. In 2022 we expect these technologies to converge across the healthcare spectrum. This article summarizes milestones achieved in 2021. It is the first in a series of progress reports I'm writing on the sector, which will be supplemented by industry performance data and metrics compiled in partnership with the Alliance for Artificial Intelligence in Healthcare (AAIH) and other top-tier resources.


A phase transition for finding needles in nonlinear haystacks with LASSO artificial neural networks

arXiv.org Machine Learning

To fit sparse linear associations, the LASSO sparsity-inducing penalty, governed by a single hyperparameter, provably recovers the important features (needles) with high probability in certain regimes, even when the sample size is smaller than the dimension of the input vector (haystack). More recently, learners known as artificial neural networks (ANNs) have shown great success in many machine learning tasks, in particular fitting nonlinear associations. A small learning rate, the stochastic gradient descent algorithm, and a large training set help cope with the explosion in the number of parameters in deep neural networks. Yet few ANN learners have been developed and studied for finding needles in nonlinear haystacks. Driven by a single hyperparameter, our ANN learner, like the LASSO for sparse linear associations, exhibits a phase transition in the probability of retrieving the needles, which we do not observe with other ANN learners. To select our penalty parameter, we generalize the universal threshold of Donoho and Johnstone (1994), a better rule than cross-validation, which is conservative (too many false detections) and expensive. In the spirit of simulated annealing, we propose a warm-start sparsity-inducing algorithm to solve the high-dimensional, non-convex, and non-differentiable optimization problem. We perform precise Monte Carlo simulations to show the effectiveness of our approach.
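The "needles in a haystack" setup and the Donoho-Johnstone universal threshold mentioned in the abstract can be illustrated in the simplest linear case. This is a toy sketch of the classical sparse-means problem, not the paper's ANN learner; all dimensions and signal strengths are invented.

```python
import numpy as np

def soft_threshold(z, lam):
    """Soft-thresholding operator: the proximal map of the L1 (LASSO) penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def universal_threshold(sigma, p):
    """Donoho-Johnstone universal threshold sigma * sqrt(2 log p)."""
    return sigma * np.sqrt(2.0 * np.log(p))

# Toy haystack: 3 strong coefficients (needles) among 1000 noisy ones.
rng = np.random.default_rng(0)
p, sigma = 1000, 1.0
beta = np.zeros(p)
beta[:3] = 8.0                       # the needles
z = beta + rng.normal(0, sigma, p)   # noisy observations

lam = universal_threshold(sigma, p)  # ~3.72 for p = 1000
est = soft_threshold(z, lam)
support = np.flatnonzero(est)        # indices retained as "needles"
```

Below the threshold, pure-noise coordinates are zeroed out with high probability, while strong signals survive; the paper generalizes this single-hyperparameter rule from the linear setting to sparsity-inducing ANN training.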


Cooperative learning for multi-view analysis

arXiv.org Machine Learning

With new technologies in biomedicine, we are able to generate and collect data of various modalities, including genomics, epigenomics, transcriptomics, and proteomics (Figure 1A). Integrating heterogeneous features on a single set of observations provides a unique opportunity to gain a comprehensive understanding of an outcome of interest. It offers the potential for making discoveries that are hidden in data analyses of a single modality and for achieving more accurate predictions of the outcome (Kristensen et al. 2014, Ritchie et al. 2015, Gligorijević et al. 2016, Karczewski & Snyder 2018, Ma et al. 2020). While "multi-view data analysis" can mean different things, we use it here in the context of supervised learning, where the goal is to fuse different data views to model an outcome of interest. To give a concrete example, assume that a researcher wants to predict cancer outcomes from RNA expression and DNA methylation measurements for a set of patients. The researcher suspects that: (1) both data views could potentially have prognostic value; and (2) the two views share some underlying relationship with each other, as DNA methylation regulates gene expression and can repress the expression of tumor suppressor genes or promote the expression of oncogenes. Should the researcher use both data views for downstream prediction, or just use one view or the other?
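The researcher's question can be made concrete with a plain early-fusion baseline (concatenating views and fitting one model), which is the naive alternative the paper's cooperative learning improves upon. This is a toy sketch on synthetic data, not the authors' method; the latent-factor construction, dimensions, and noise levels are all invented to mimic two related views.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p1, p2 = 200, 20, 20

# Two views driven by a shared latent signal, as a toy stand-in for the
# expression/methylation relationship in the motivating example.
latent = rng.normal(size=(n, 5))
X_expr = latent @ rng.normal(size=(5, p1)) + 0.5 * rng.normal(size=(n, p1))
X_meth = latent @ rng.normal(size=(5, p2)) + 0.5 * rng.normal(size=(n, p2))
y = latent[:, 0] + 0.1 * rng.normal(size=n)  # outcome depends on the latent signal

def ridge_fit_predict(X_tr, y_tr, X_te, lam=1.0):
    """Closed-form ridge regression: w = (X'X + lam I)^{-1} X'y."""
    p = X_tr.shape[1]
    w = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(p), X_tr.T @ y_tr)
    return X_te @ w

def mse(a, b):
    return float(np.mean((a - b) ** 2))

tr, te = slice(0, 150), slice(150, None)
err_expr = mse(ridge_fit_predict(X_expr[tr], y[tr], X_expr[te]), y[te])
err_meth = mse(ridge_fit_predict(X_meth[tr], y[tr], X_meth[te]), y[te])
X_both = np.hstack([X_expr, X_meth])                       # early fusion
err_both = mse(ridge_fit_predict(X_both[tr], y[tr], X_both[te]), y[te])
```

Comparing the three test errors answers the question empirically for this toy data; cooperative learning goes further by adding an agreement penalty that exploits the shared structure between views rather than merely concatenating them.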


Challenges of Artificial Intelligence -- From Machine Learning and Computer Vision to Emotional Intelligence

arXiv.org Artificial Intelligence

Artificial intelligence (AI) has become a part of everyday conversation and our lives. It is considered the new electricity that is revolutionizing the world. AI attracts heavy investment in both industry and academia. However, there is also a lot of hype in the current AI debate. AI based on so-called deep learning has achieved impressive results in many problems, but its limits are already visible. AI has been under research since the 1940s, and the field has seen many ups and downs due to over-expectations and the disappointments that have followed. The purpose of this book is to give a realistic picture of AI: its history, its potential, and its limitations. We believe that AI is a helper, not a ruler, of humans. We begin by describing what AI is and how it has evolved over the decades. After the fundamentals, we explain the importance of massive data for the current mainstream of artificial intelligence. The most common representations for AI, its methods, and machine learning are covered. In addition, the main application areas are introduced. Computer vision has been central to the development of AI. The book provides a general introduction to computer vision and includes an exposure to the results and applications of our own research. Emotions are central to human intelligence, but they have seen little use in AI. We present the basics of emotional intelligence and our own research on the topic. We discuss super-intelligence that transcends human understanding, explaining why such an achievement seems impossible on the basis of present knowledge, and how AI could be improved. Finally, we summarize the current state of AI and what to do in the future. In the appendix, we look at the development of AI education, especially from the perspective of the contents at our own university.


KerGNNs: Interpretable Graph Neural Networks with Graph Kernels

arXiv.org Artificial Intelligence

Graph kernels are historically the most widely used technique for graph classification tasks. However, these methods suffer from limited performance because of the hand-crafted combinatorial features of graphs. In recent years, graph neural networks (GNNs) have become the state-of-the-art method for downstream graph-related tasks due to their superior performance. Most GNNs are based on the Message Passing Neural Network (MPNN) framework. However, recent studies show that MPNNs cannot exceed the power of the Weisfeiler-Lehman (WL) algorithm in the graph isomorphism test. To address the limitations of existing graph kernel and GNN methods, in this paper we propose a novel GNN framework, termed Kernel Graph Neural Networks (KerGNNs), which integrates graph kernels into the message passing process of GNNs. Inspired by convolution filters in convolutional neural networks (CNNs), KerGNNs adopt trainable hidden graphs as graph filters, which are combined with subgraphs to update node embeddings using graph kernels. In addition, we show that MPNNs can be viewed as special cases of KerGNNs. We apply KerGNNs to multiple graph-related tasks and use cross-validation to make fair comparisons with benchmarks. We show that our method achieves competitive performance compared with existing state-of-the-art methods, demonstrating the potential to increase the representation ability of GNNs. We also show that the trained graph filters in KerGNNs can reveal the local graph structures of the dataset, which significantly improves model interpretability compared with conventional GNN models.
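The core operation the abstract describes, scoring a small "hidden graph" filter against a node's local subgraph via a graph kernel, can be sketched with a simple random-walk kernel. This is a toy illustration of the filter idea, not the authors' trained KerGNN layer; the decay weights, walk length, and example graphs are invented.

```python
import numpy as np

def random_walk_kernel(A1, A2, steps=3, decay=0.5):
    """Simple random-walk graph kernel: counts common walks of length up
    to `steps` via the Kronecker-product (direct-product) graph."""
    Ax = np.kron(A1, A2)              # adjacency of the product graph
    n = Ax.shape[0]
    total, power = 0.0, np.eye(n)
    for k in range(1, steps + 1):
        power = power @ Ax            # walks of length k in the product graph
        total += (decay ** k) * power.sum()
    return total

# A tiny graph filter (the "trainable hidden graph" in KerGNN terms)
# scored against two candidate local subgraphs: a triangle and a path.
filter_graph = np.array([[0, 1, 1],
                         [1, 0, 1],
                         [1, 1, 0]], dtype=float)  # triangle
triangle = filter_graph.copy()
path = np.array([[0, 1, 0],
                 [1, 0, 1],
                 [0, 1, 0]], dtype=float)

k_tri = random_walk_kernel(filter_graph, triangle)   # high: same structure
k_path = random_walk_kernel(filter_graph, path)      # lower: fewer shared walks
```

In a KerGNN-style layer, such kernel scores between learned filter graphs and each node's neighborhood subgraph would feed the node-embedding update, analogous to how a CNN filter's inner product with an image patch feeds a feature map.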


Towards a Science of Human-AI Decision Making: A Survey of Empirical Studies

arXiv.org Artificial Intelligence

As AI systems demonstrate increasingly strong predictive performance, their adoption has grown in numerous domains. However, in high-stakes domains such as criminal justice and healthcare, full automation is often not desirable due to safety, ethical, and legal concerns, yet fully manual approaches can be inaccurate and time-consuming. As a result, there is growing interest in the research community in augmenting human decision making with AI assistance. Besides developing AI technologies for this purpose, the emerging field of human-AI decision making must embrace empirical approaches to form a foundational understanding of how humans interact and work with AI to make decisions. To invite and help structure research efforts towards a science of understanding and improving human-AI decision making, we survey the recent literature of empirical human-subject studies on this topic. We summarize the study design choices made in over 100 papers in three important aspects: (1) decision tasks, (2) AI models and AI assistance elements, and (3) evaluation metrics. For each aspect, we summarize current trends, discuss gaps in current practices of the field, and make a list of recommendations for future research. Our survey highlights the need to develop common frameworks to account for the design and research spaces of human-AI decision making, so that researchers can make rigorous choices in study design, and the research community can build on each other's work and produce generalizable scientific knowledge. We also hope this survey will serve as a bridge for the HCI and AI communities to work together to mutually shape the empirical science and computational technologies for human-AI decision making.