

The rise of AI in medicine


By now, it's almost old news that artificial intelligence (AI) will have a transformative role in medicine. Algorithms have the potential to work tirelessly, at faster rates, and now with potentially greater accuracy than clinicians. In 2016, it was predicted that 'machine learning will displace much of the work of radiologists and anatomical pathologists'. In the same year, a University of Toronto professor controversially announced that 'we should stop training radiologists now'. But is it really the beginning of the end for some medical specialties?

How to Apply Supervised Machine Learning Tools to MS Imaging Files: Case Study with Cancer Spheroids Undergoing Treatment with the Monoclonal Antibody Cetuximab


As the field of mass spectrometry imaging continues to grow, so too does its need for optimal methods of data analysis. One general need in image analysis is the ability to classify the underlying regions within an image, for example as healthy or diseased. Classification, as a general problem, is often best accomplished by supervised machine learning strategies; unfortunately, supervised machine learning on MS imaging files is not typically done by mass spectrometrists because a high degree of specialized knowledge is needed. To address this problem, we developed a fully open-source approach that facilitates supervised machine learning on MS imaging files, and we demonstrated its implementation on sets of cancer spheroids that either had or had not undergone chemotherapy treatment. These supervised machine learning studies demonstrated that metabolic changes induced by the monoclonal antibody Cetuximab are detectable but modest at 24 h, and by 72 h the drug induces a larger and more diverse metabolic response.
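The classification task described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the paper's actual pipeline: it assumes each pixel's mass spectrum has already been exported as a row of binned m/z intensities, and it uses synthetic random data in place of real spheroid spectra.

```python
# Hypothetical sketch: classifying MS-imaging pixels as treated vs. control.
# Rows = pixels, columns = binned m/z intensities; all data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_pixels, n_mz_bins = 400, 50
X = rng.random((n_pixels, n_mz_bins))   # stand-in for binned spectra
y = rng.integers(0, 2, n_pixels)        # 0 = control, 1 = treated

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

With real spectra, per-pixel predictions can be mapped back onto pixel coordinates to produce a classified image of the spheroid section.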

New application of machine learning and image analysis to help distinguish a rare subtype of kidney cancer: US researchers collaborate with scientist quarantined in China during COVID-19 outbreak


Despite those obstacles, Indiana University School of Medicine faculty and Regenstrief Institute research scientists had their research published in Nature Communications on April 14, an even more significant feat considering that one of the leading authors has been quarantined in Wuhan, China for the last two months of their work. The team consists of Affiliated Scientist Jie Zhang, PhD, Regenstrief Institute Research Scientist Kun Huang, PhD, both Indiana University School of Medicine faculty members, Jun Cheng, PhD, of Shenzhen University, and colleagues including Liang Cheng, M.D. of IU School of Medicine. The study was led by Dr. Zhang, an assistant professor of medical and molecular genetics at IU School of Medicine. The work focuses on the application of machine learning and image analysis to help researchers distinguish a rare subtype of kidney cancer (translocation renal cell carcinoma, or tRCC) from other subtypes by examining the features of cells and tissues at a microscopic level. Dr. Zhang said the structural similarities have caused a high rate of misdiagnosis.

Immunai wants to map the entire immune system and raised $20 million in seed funding to do it – TechCrunch


For the past two years the founding team of Immunai had been working stealthily to develop a new technology to map the immune system of any patient. Founded by Noam Solomon, a Harvard- and MIT-educated postdoctoral researcher, and Luis Voloch, a former Palantir engineer, Immunai was born from the two men's interest in computational biology and systems engineering. When the two were introduced to Ansuman Satpathy, a professor of cancer immunology at Stanford University, and Danny Wells, a data scientist at the Parker Institute for Cancer Immunotherapy, the path forward for the company became clear. "Together we said we bring the understanding of all the technology and machine learning that needs to be brought into the work and Ansu and Danny bring the single-cell biology," said Solomon. Now, as the company unveils itself and the $20 million in financing it has received from investors including Viola Ventures and TLV Partners, it's going to be making a hiring push and expanding its already robust research and development activities.

Automatically Assessing Quality of Online Health Articles

The information ecosystem today is overwhelmed by an unprecedented quantity of data on a wide range of topics and of varied quality. The quality of information disseminated in the field of medicine is of particular concern, as the consequences of health misinformation can be life-threatening. There is currently no generic automated tool for evaluating the quality of online health information across a broad range of topics. To address this gap, in this paper we apply a data mining approach to automatically assess the quality of online health articles against 10 quality criteria. We prepared a labeled dataset with 53,012 features and applied different feature selection methods to identify the best feature subset, with which our trained classifier achieved an accuracy of 84%-90%, varying across the 10 criteria. Our semantic analysis of features shows the underlying associations between the selected features and the assessment criteria, further rationalizing our assessment approach. Our findings will help in identifying high-quality health articles, aiding users in making informed choices when seeking health-related information online.
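The "feature selection, then train a classifier per criterion" workflow described above can be illustrated with a short sketch. This is my own toy rendering under stated assumptions, not the paper's code: the feature matrix here is synthetic count data standing in for the real 53,012 extracted features, and the scorer and classifier choices are illustrative.

```python
# Illustrative sketch: select the k best features by a chi-squared score,
# then fit a classifier for one quality criterion. All data is synthetic.
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.integers(0, 5, (200, 1000)).astype(float)  # stand-in feature counts
y = rng.integers(0, 2, 200)                        # meets / fails a criterion

model = make_pipeline(
    SelectKBest(chi2, k=100),          # keep the 100 highest-scoring features
    LogisticRegression(max_iter=1000), # classify on the reduced subset
)
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```

In the paper's setting, one such pipeline would be trained per quality criterion, and the selected feature subsets can be inspected to explain which signals drive each criterion's judgment.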

Health State Estimation

Life's most valuable asset is health. Continuously understanding the state of our health and modeling how it evolves is essential if we wish to improve it. People today live with more data about their lives than at any other time in history; the challenge rests in interweaving this data with the growing body of knowledge to continually compute and model the health state of an individual. This dissertation presents an approach to build a personal model and dynamically estimate the health state of an individual by fusing multi-modal data and domain knowledge. The system is stitched together from four essential abstraction elements: 1. the events in our life, 2. the layers of our biological systems (from the molecular level to the organism), 3. the functional utilities that arise from biological underpinnings, and 4. how we interact with these utilities in the reality of daily life. Connecting these four elements via graph network blocks forms the backbone by which we instantiate a digital twin of an individual. Edges and nodes in this graph structure are then regularly updated with learning techniques as data is continuously digested. Experiments demonstrate the use of dense and heterogeneous real-world data from a variety of personal and environmental sensors to monitor individual cardiovascular health state. State estimation and individual modeling are the fundamental basis for departing from disease-oriented approaches toward a total health continuum paradigm. Precision in predicting health requires understanding state trajectory. By encasing this estimation within a navigational approach, a systematic guidance framework can plan actions to transition a current state towards a desired one. This work concludes by presenting this framework of combining the health state and personal graph model to perpetually plan and assist us in living life towards our goals.
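The four-element graph and its continuous updates can be pictured with a toy sketch. This is my own minimal rendering, not the dissertation's code: node names, edge weights, and the simple exponential-style update rule are all invented for illustration.

```python
# Toy sketch of a "personal graph": nodes from the four abstraction layers
# (event, biology, utility, interaction), with edge weights nudged toward
# each new observation as sensor data is digested. All values are made up.
graph = {
    ("morning_run", "heart"): 0.5,          # event -> biological layer
    ("heart", "aerobic_capacity"): 0.5,     # biology -> functional utility
    ("wearable_hr_stream", "heart"): 0.5,   # daily interaction -> biology
}

def update_edge(g, edge, observation, lr=0.1):
    """Move an edge weight a fraction lr of the way toward a new observation,
    a stand-in for the learning-based updates described in the abstract."""
    g[edge] += lr * (observation - g[edge])

# A strong new cardio reading strengthens the heart -> aerobic_capacity link:
update_edge(graph, ("heart", "aerobic_capacity"), observation=0.9)
# weight moves from 0.5 toward 0.9: 0.5 + 0.1 * (0.9 - 0.5) = 0.54
```

Running many such updates over streaming sensor data is one simple way a graph-structured digital twin could track a slowly evolving health state.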

Globalizing the AI Revolution in Health Care by Dominik Ruettinger


MUNICH – We are entering a transformational period in medical science, as traditional research techniques combine with massive computing power and a wealth of new data. Just recently, Google announced that it has developed an artificial intelligence (AI) system capable of outperforming human radiologists in detecting breast cancer. And that is merely the latest example of how machine learning and big data are leading to new medical diagnostics, treatments, and discoveries. To realize AI's enormous potential, however, we must develop a pragmatic and globally agreed approach to governing the collection and use of "real-world data." Like climate change, the COVID-19 pandemic is a perfect example of why we need multilateralism in a globalized world.

A Hierarchy of Limitations in Machine Learning

There is little argument about whether or not machine learning models are useful for applying to social systems. But if we take seriously George Box's dictum, or indeed the even older one that "the map is not the territory" (Korzybski, 1933), then there has been comparatively less systematic attention paid within the field to how machine learning models are wrong (Selbst et al., 2019) and seeing possible harms in that light. By "wrong" I do not mean in terms of making misclassifications, or even fitting over the "wrong" class of functions, but more fundamental mathematical/statistical assumptions, philosophical (in the sense used by Abbott, 1988) commitments about how we represent the world, and sociological processes of how models interact with target phenomena. This paper takes a particular model of machine learning research or application: one that its creators and deployers think provides a reliable way of interacting with the social world (whether that is through understanding, or in making predictions) without any intent to cause harm (McQuillan, 2018) and, in fact, a desire to not cause harm and instead improve the world, for example as most explicitly in the various "{Data [Science], Machine Learning, Artificial Intelligence} for [Social] Good" initiatives, and more widely in framings around "fairness" or "ethics." I focus on the almost entirely statistical modern version of machine learning, rather than eclipsed older visions (see section 3). While many of the limitations I discuss apply to the use of machine learning in any domain, I focus on applications to the social world in order to explore the domain where limitations are strongest and stickiest.

Hunting for New Drugs with AI


THERE ARE MANY REASONS that promising drugs wash out during pharmaceutical development, and one of them is cytochrome P450. A set of enzymes mostly produced in the liver, CYP450, as it is commonly called, is involved in breaking down chemicals and preventing them from building up to dangerous levels in the bloodstream. Many experimental drugs, it turns out, inhibit the production of CYP450, a vexing side effect that can render such a drug toxic in humans. Drug companies have long relied on conventional tools to try to predict whether a drug candidate will inhibit CYP450 in patients, such as by conducting chemical analyses in test tubes, looking at CYP450 interactions with better-understood drugs that have chemical similarities, and running tests on mice. But their predictions are wrong about a third of the time.
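An AI approach to this problem typically frames CYP450 inhibition as a binary prediction over a molecule's structural features. The sketch below is a hedged illustration of that framing only, not any company's actual model: the fingerprint bits, labels, and model choice are all invented stand-ins.

```python
# Hedged illustration: predict CYP450 inhibition from a molecule's binary
# structural "fingerprint" (substructure presence flags). Data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
fingerprints = rng.integers(0, 2, (300, 128))   # 128-bit toy fingerprints
inhibits_cyp450 = rng.integers(0, 2, 300)       # 1 = known inhibitor

model = LogisticRegression(max_iter=1000).fit(fingerprints, inhibits_cyp450)

# Score a new candidate molecule (here, just the first row again):
proba = model.predict_proba(fingerprints[:1])[0, 1]
print(f"predicted inhibition probability: {proba:.2f}")
```

Trained on real assay data rather than random bits, such a model would let chemists flag likely CYP450 inhibitors before committing to expensive test-tube and animal studies.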

6 expert essays on the future of biotech


What exactly is biotechnology, and how could it change our approach to human health? As the age of big data transforms the potential of this emerging field, members of the World Economic Forum's Global Future Council on Biotechnology tell you everything you need to know. What if your doctor could predict your heart attack before you had it, and prevent it? Or what if we could cure a child's cancer by exploiting the bacteria in their gut? These types of biotechnology solutions aimed at improving human health are already being explored. As more and more data (so-called "big data") becomes available across disparate domains such as electronic health records, genomics, metabolomics, and even lifestyle information, further insights and opportunities for biotechnology will become apparent. However, to achieve the maximal potential, both technical and ethical issues will need to be addressed. As we look to the future, let's first revisit previous examples of where combining data with scientific understanding has led to new health solutions. Biotechnology is a rapidly changing field that continues to transform both in scope and impact. Karl Ereky first coined the term biotechnology in 1919.