Get ready for your evil twin


Earlier this year, a chilling academic study was published by researchers at Lancaster University and UC Berkeley. Using a sophisticated form of AI known as a GAN (Generative Adversarial Network), they created artificial human faces. They discovered that this type of AI technology has become so effective that we humans can no longer tell the difference between real people and virtual people (or "veeple," as I call them). You see, they also asked their test subjects to rate the "trustworthiness" of each face and discovered that consumers find AI-generated faces to be significantly more trustworthy than real faces.

Team Uses AI and Robotics to Treat Spinal Cord Injuries


A team of researchers at Rutgers University has employed artificial intelligence (AI) and robotics to formulate therapeutic proteins. The team successfully stabilized an enzyme that can degrade scar tissue resulting from spinal cord injuries and promote tissue regeneration. The study was published in Advanced Healthcare Materials.

How artificial intelligence is changing drug discovery


An enormous figure looms over scientists searching for new drugs: the estimated US$2.6-billion price tag of developing a treatment. A lot of that effectively goes down the drain, because it includes money spent on the nine out of ten candidate therapies that fail somewhere between phase I trials and regulatory approval. Few people in the field doubt the need to do things differently. Leading biopharmaceutical companies believe a solution is at hand. Pfizer is using IBM Watson, a system that uses machine learning, to power its search for immuno-oncology drugs.

Predicting treatment effects from observational studies using machine learning methods: A simulation study Machine Learning

Measuring treatment effects in observational studies is challenging because of confounding bias. Confounding occurs when a variable affects both the treatment and the outcome. Traditional methods such as propensity score matching estimate treatment effects by conditioning on the confounders. Recent literature has presented new methods that use machine learning to predict the counterfactuals in observational studies, which then allow for estimating treatment effects. These studies, however, have been applied to real-world data where the true treatment effects are not known. This study aimed to assess the effectiveness of this counterfactual prediction method by simulating two main scenarios: with and without confounding. Each scenario also included linear and non-linear relationships between input and output data. The key feature of the simulations was that we generated known true causal effects. Linear regression, lasso regression, and random forest models were used to predict the counterfactuals and treatment effects. These estimates were compared with the true treatment effect as well as a naive treatment effect. The results show that the most important factor in whether this machine learning method performs well is the degree of non-linearity in the data. Surprisingly, for both non-confounding and confounding, the machine learning models all performed well on the linear dataset. However, when non-linearity was introduced, the models performed very poorly. Therefore, under the conditions of this simulation study, the machine learning method performs well under conditions of linearity, even if confounding is present, but at this stage should not be trusted when non-linearity is introduced.
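The counterfactual-prediction idea described in the abstract can be illustrated with a short sketch. This is not the authors' code: it simulates a linear dataset with a known treatment effect and a confounder, fits one outcome model per treatment arm (sometimes called a "T-learner"), and compares the resulting estimate with both the truth and the naive difference in means. All numbers are illustrative assumptions.

```python
# Sketch of ML-based counterfactual prediction on simulated data with a
# known true treatment effect and a single confounder (illustrative only).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 5000
true_effect = 2.0

x = rng.normal(size=(n, 1))            # confounder
p = 1 / (1 + np.exp(-x[:, 0]))         # treatment assignment depends on x
t = rng.binomial(1, p)
y = true_effect * t + 1.5 * x[:, 0] + rng.normal(scale=0.5, size=n)

# Fit one outcome model per arm, conditioning on the confounder
m1 = LinearRegression().fit(x[t == 1], y[t == 1])
m0 = LinearRegression().fit(x[t == 0], y[t == 0])

# Predicted counterfactuals for every unit give an average treatment effect
ate_hat = np.mean(m1.predict(x) - m0.predict(x))

# The naive estimate, which ignores the confounder, is biased upward here
naive = y[t == 1].mean() - y[t == 0].mean()
print(ate_hat, naive)
```

Because the data-generating process here is linear, the model recovers the true effect closely, mirroring the paper's finding that the approach works well under linearity even with confounding present.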

Ghost in the machine or monkey with a typewriter--generating titles for Christmas research articles in The BMJ using artificial intelligence: observational study


Objective To determine whether artificial intelligence (AI) can generate plausible and engaging titles for potential Christmas research articles in The BMJ . Design Observational study. Setting Europe, Australia, and Africa. Participants 1 AI technology (Generative Pre-trained Transformer 3, GPT-3) and 25 humans. Main outcome measures Plausibility, attractiveness, enjoyability, and educational value of titles for potential Christmas research articles in The BMJ generated by GPT-3 compared with historical controls. Results AI-generated titles were rated at least as enjoyable (159/250 responses (64%) v 346/500 responses (69%); odds ratio 0.9, 95% confidence interval 0.7 to 1.2) and attractive (176/250 (70%) v 342/500 (68%); 1.1, 0.8 to 1.4) as real control titles, although the real titles were rated as more plausible (182/250 (73%) v 238/500 (48%); 3.1, 2.3 to 4.1). The AI-generated titles overall were rated as having less scientific or educational merit than the real controls (146/250 (58%) v 193/500 (39%); 2.0, 1.5 to 2.6); this difference, however, became non-significant when humans curated the AI output (146/250 (58%) v 123/250 (49%); 1.3, 1.0 to 1.8). Of the AI-generated titles, the most plausible was “The association between belief in conspiracy theories and the willingness to receive vaccinations,” and the highest rated was “The effects of free gourmet coffee on emergency department waiting times: an observational study.” Conclusions AI can generate plausible, entertaining, and scientifically interesting titles for potential Christmas research articles in The BMJ ; as in other areas of medicine, performance was enhanced by human intervention. Dataset and full reproducible code are available at .

Automated deep learning-based paradigm for high-risk plaque detection in B-mode common carotid ultrasound scans: an asymptomatic Japanese cohort study


BACKGROUND: Stroke deaths are caused by arterial embolism resulting from the rupture of atherosclerotic lesions in the carotid arteries. These lesions form over time, so early screening is recommended for asymptomatic and moderate-risk patients. Previous techniques relied on conventional or semi-automated methods and, more recently, machine learning solutions. A handful of studies have emerged based on solo deep learning (SDL) models such as the UNet architecture. METHODS: The proposed research is the first to adopt hybrid deep learning (HDL) artificial intelligence models such as SegNet-UNet.

An Artificial Intelligence-Powered Platform for Prostate Cancer Grading


While the Gleason grading system has been the most reliable tool for the prognosis of prostate cancer since its development, its clinical application remains limited. A study examined the impact of an artificial intelligence (AI)-assisted approach to prostate cancer grading and quantification. The findings were published in JAMA Network Open. This diagnostic study was conducted from August 2, 2017, to December 30, 2019. The study consisted of 589 men (mean age, 63.8 years) with biopsy-confirmed prostate cancer who received care in the University of Wisconsin Health System between January 1, 2005, and February 28, 2017.

Formation of Social Ties Influences Food Choice: A Campus-Wide Longitudinal Study Artificial Intelligence

Nutrition is a key determinant of long-term health, and social influence has long been theorized to be a key determinant of nutrition. It has been difficult to quantify the postulated role of social influence on nutrition using traditional methods such as surveys, due to the typically small scale and short duration of studies. To overcome these limitations, we leverage a novel source of data: logs of 38 million food purchases made over an 8-year period on the Ecole Polytechnique Federale de Lausanne (EPFL) university campus, linked to anonymized individuals via the smartcards used to make on-campus purchases. In a longitudinal observational study, we ask: How is a person's food choice affected by eating with someone else whose own food choice is healthy vs. unhealthy? To estimate causal effects from the passively observed log data, we control confounds in a matched quasi-experimental design: we identify focal users who at first do not have any regular eating partners but then start eating with a fixed partner regularly, and we match focal users into comparison pairs such that paired users are nearly identical with respect to covariates measured before acquiring the partner, where the two focal users' new eating partners diverge in the healthiness of their respective food choice. A difference-in-differences analysis of the paired data yields clear evidence of social influence: focal users acquiring a healthy-eating partner change their habits significantly more toward healthy foods than focal users acquiring an unhealthy-eating partner. We further identify foods whose purchase frequency is impacted significantly by the eating partner's healthiness of food choice. Beyond the main results, the work demonstrates the utility of passively sensed food purchase logs for deriving insights, with the potential of informing the design of public health interventions and food offerings.
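The difference-in-differences estimator at the heart of the study can be sketched in a few lines. This is not the authors' code and the numbers are invented assumptions: for matched pairs of focal users, we compare the change in healthy-food purchase share after acquiring a healthy-eating partner against the change for users acquiring an unhealthy-eating partner.

```python
# Minimal difference-in-differences sketch on simulated matched pairs
# (illustrative assumptions: a +0.05 social-influence effect on healthy
# purchase share, plus a +0.02 time trend common to both groups).
import numpy as np

rng = np.random.default_rng(1)
n_pairs = 1000

# Pre-period healthy-purchase share: matched, so both groups start alike
pre_healthy = rng.normal(0.40, 0.05, n_pairs)
pre_unhealthy = rng.normal(0.40, 0.05, n_pairs)

# Post-period: influence effect (+0.05) only for the healthy-partner group;
# common trend (+0.02) affects everyone
post_healthy = pre_healthy + 0.02 + 0.05 + rng.normal(0, 0.05, n_pairs)
post_unhealthy = pre_unhealthy + 0.02 + rng.normal(0, 0.05, n_pairs)

# DiD cancels the shared trend and recovers the influence effect
did = (post_healthy - pre_healthy).mean() \
    - (post_unhealthy - pre_unhealthy).mean()
print(did)
```

The shared time trend drops out of the difference of differences, which is why the design isolates the partner's influence from campus-wide shifts in eating habits.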

Modern Multiple Imputation with Functional Data Machine Learning

This work considers the problem of fitting functional models with sparsely and irregularly sampled functional data. It overcomes the limitations of the state-of-the-art methods, which face major challenges in the fitting of more complex non-linear models. Currently, many of these models cannot be consistently estimated unless the number of observed points per curve grows sufficiently quickly with the sample size, whereas we show numerically that a modified approach with more modern multiple imputation methods can produce better estimates in general. We also propose a new imputation approach that combines the ideas of MissForest with Local Linear Forest, and compare its performance with PACE and several other multivariate multiple imputation methods. This work is motivated by a longitudinal study on smoking cessation, in which the Electronic Health Records (EHR) from Penn State PaTH to Health allow for the collection of a great deal of data, with highly variable sampling. To illustrate our approach, we explore the relation between relapse and diastolic blood pressure. We also consider a variety of simulation schemes with varying levels of sparsity to validate our methods.
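Forest-based imputation of sparsely observed curves can be sketched as follows. The paper's MissForest-with-Local-Linear-Forest combination has no off-the-shelf scikit-learn equivalent, so as an assumption this sketch approximates the MissForest idea with `IterativeImputer` wrapping a tree ensemble; it is not the authors' method.

```python
# MissForest-style iterative imputation of sparsely sampled curves,
# approximated with scikit-learn components (illustrative assumptions).
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(2)
n_curves, n_grid = 200, 20
t = np.linspace(0, 1, n_grid)

# Smooth random curves driven by two latent scores, observed on a common grid
scores = rng.normal(size=(n_curves, 2))
curves = scores[:, :1] * np.sin(2 * np.pi * t) + scores[:, 1:] * np.cos(2 * np.pi * t)

# Sparse, irregular sampling: hide roughly 60% of grid points per curve
mask = rng.random(curves.shape) < 0.6
sparse = curves.copy()
sparse[mask] = np.nan

# Iteratively regress each grid point on the others with a tree ensemble
imputer = IterativeImputer(
    estimator=ExtraTreesRegressor(n_estimators=50, random_state=0),
    max_iter=5, random_state=0,
)
completed = imputer.fit_transform(sparse)

rmse = np.sqrt(np.mean((completed[mask] - curves[mask]) ** 2))
print(rmse)
```

Because the curves share a low-dimensional structure, the forest imputer recovers the hidden values far better than per-point mean imputation would (whose error here would be close to the per-point standard deviation of 1).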

Using Artificial Intelligence to Improve Prostate Biopsies - Docwire News


Researchers from Google Health found that using artificial intelligence (AI) to aid in the review of prostate biopsies improved the quality, efficiency, and consistency of cancer detection and grading. In a prostate biopsy, tissue is removed and assessed for cell abnormalities that may be linked to prostate cancer. The standard grading system for this procedure is the Gleason grade (GG) system, involving classification into 1 of 5 prognostic groups. Expert-level AI algorithms for prostate biopsy grading, like this one from Google Health, have recently been developed to combat interpathologist variability associated with grading. In this diagnostic study, retrospective grading of prostate core needle biopsies was conducted at two medical laboratories in the US between October 2019 and January 2020.