Earnshaw, Berton
Shaping Inductive Bias in Diffusion Models through Frequency-Based Noise Control
Jiralerspong, Thomas, Earnshaw, Berton, Hartford, Jason, Bengio, Yoshua, Scimeca, Luca
Diffusion Probabilistic Models (DPMs) are powerful generative models that have achieved unparalleled success across a range of generative tasks. In this work, we aim to build inductive biases into the training and sampling of diffusion models to better accommodate the target distribution of the data being modeled. For topologically structured data, we devise a frequency-based noising operator to purposefully manipulate and set these inductive biases. We first show that appropriate manipulations of the noising forward process can lead DPMs to focus on particular aspects of the distribution being learned. We then show that different datasets necessitate different inductive biases, and that appropriate frequency-based noise control improves generative performance relative to standard diffusion. Finally, we demonstrate the possibility of ignoring information at particular frequencies during learning. We show this in an image corruption and recovery task, where we train a DPM to recover the original target distribution after severe noise corruption.
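The abstract does not spell out the operator itself, but one way to realize frequency-based noise control is to color the Gaussian noise in the Fourier domain before the usual DDPM mixing step. The sketch below illustrates that idea under stated assumptions; the function names `frequency_noising` and `radial_band_weights` and the specific radial weighting are illustrative, not the paper's actual operator.

```python
import torch

def frequency_noising(x0, alpha_bar_t, band_weights):
    """One step of a frequency-weighted forward process (a sketch).

    x0:           clean images, shape (B, C, H, W)
    alpha_bar_t:  cumulative signal-retention coefficient in [0, 1]
    band_weights: (H, W) mask scaling noise amplitude per spatial frequency
    """
    # Draw white Gaussian noise and color it in the Fourier domain.
    eps = torch.randn_like(x0)
    eps_f = torch.fft.fft2(eps)
    colored = torch.fft.ifft2(eps_f * band_weights).real

    # Standard DDPM mixing, but with frequency-colored noise.
    return (alpha_bar_t ** 0.5) * x0 + ((1 - alpha_bar_t) ** 0.5) * colored

def radial_band_weights(h, w, cutoff=0.25, low=1.0, high=2.0):
    """Example weighting: noise high frequencies more aggressively."""
    fy = torch.fft.fftfreq(h).view(-1, 1)
    fx = torch.fft.fftfreq(w).view(1, -1)
    radius = (fx ** 2 + fy ** 2).sqrt()
    return torch.where(radius < cutoff, torch.tensor(low), torch.tensor(high))
```

With `high > low`, fine spatial detail is destroyed earlier in the forward process, biasing the model toward low-frequency structure; reversing the weights inverts that bias.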
Masked Autoencoders for Microscopy are Scalable Learners of Cellular Biology
Kraus, Oren, Kenyon-Dean, Kian, Saberian, Saber, Fallah, Maryam, McLean, Peter, Leung, Jess, Sharma, Vasudev, Khan, Ayla, Balakrishnan, Jia, Celik, Safiye, Beaini, Dominique, Sypetkowski, Maciej, Cheng, Chi Vicky, Morse, Kristen, Makes, Maureen, Mabey, Ben, Earnshaw, Berton
Featurizing microscopy images for use in biological research remains a significant challenge, especially for large-scale experiments spanning millions of images. This work explores the scaling properties of weakly supervised classifiers and self-supervised masked autoencoders (MAEs) trained with increasingly larger model backbones and microscopy datasets. Our results show that ViT-based MAEs outperform weakly supervised classifiers on a variety of tasks, achieving as much as an 11.5% relative improvement when recalling known biological relationships curated from public databases. Additionally, we develop a new channel-agnostic MAE architecture (CA-MAE) that accepts images with different numbers and orderings of channels at inference time. We demonstrate that CA-MAEs generalize effectively by inferring and evaluating on a microscopy image dataset (JUMP-CP) generated under different experimental conditions and with a different channel structure than our pretraining data (RPI-93M). Our findings motivate continued research into scaling self-supervised learning on microscopy data in order to create powerful foundation models of cellular biology with the potential to catalyze advances in drug discovery and beyond.
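A common way to make a ViT tokenizer channel-agnostic is to patchify each channel independently with a shared single-channel projection and tag tokens with a learned channel embedding. The sketch below illustrates that pattern; the class and parameter names are hypothetical, and the paper's actual CA-MAE design may differ.

```python
import torch
import torch.nn as nn

class ChannelAgnosticPatchEmbed(nn.Module):
    """Sketch of a channel-agnostic tokenizer: each channel is patchified
    with a shared single-channel projection, so any number or ordering of
    channels can be supplied at inference time. Illustrative only."""

    def __init__(self, patch=16, dim=768, max_channels=16):
        super().__init__()
        # One projection shared across all channels.
        self.proj = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
        # Learned embedding identifying which stain/channel a token came from.
        self.channel_embed = nn.Embedding(max_channels, dim)

    def forward(self, x, channel_ids):
        # x: (B, C, H, W); channel_ids: (C,) long tensor of channel identities
        B, C, H, W = x.shape
        tokens = []
        for i in range(C):
            t = self.proj(x[:, i : i + 1])        # (B, dim, H/p, W/p)
            t = t.flatten(2).transpose(1, 2)      # (B, N, dim)
            tokens.append(t + self.channel_embed(channel_ids[i]))
        return torch.cat(tokens, dim=1)           # (B, C*N, dim)
```

Because channel identity is carried by the embedding rather than by input-channel position, a model pretrained on one stain panel can ingest a dataset like JUMP-CP with a different channel count and order.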
Masked Autoencoders are Scalable Learners of Cellular Morphology
Kraus, Oren, Kenyon-Dean, Kian, Saberian, Saber, Fallah, Maryam, McLean, Peter, Leung, Jess, Sharma, Vasudev, Khan, Ayla, Balakrishnan, Jia, Celik, Safiye, Sypetkowski, Maciej, Cheng, Chi Vicky, Morse, Kristen, Makes, Maureen, Mabey, Ben, Earnshaw, Berton
Inferring biological relationships from cellular phenotypes in high-content microscopy screens presents both significant opportunities and challenges for biological research. Prior results have shown that deep vision models capture biological signal better than hand-crafted features. This work explores how self-supervised deep learning approaches scale when training larger models on larger microscopy datasets. Our results show that both CNN- and ViT-based masked autoencoders significantly outperform weakly supervised baselines. At the high end of our scale, a ViT-L/8 trained on over 3.5 billion unique crops sampled from 93 million microscopy images achieves relative improvements as high as 28% over our best weakly supervised baseline at inferring known biological relationships curated from public databases. Relevant code and select models released with this work can be found at: https://github.com/recursionpharma/maes_microscopy.
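For context, the core mechanism behind the masked autoencoders referenced here is random patch masking with reconstruction of the hidden patches (He et al., 2022). The sketch below is a generic version of that masking step, not the exact recipe used in this paper.

```python
import torch

def random_masking(tokens, mask_ratio=0.75):
    """Generic MAE-style masking: keep a random subset of patch tokens
    and return a binary mask marking the tokens to be reconstructed."""
    B, N, D = tokens.shape
    n_keep = int(N * (1 - mask_ratio))

    # Random per-sample permutation; keep the first n_keep tokens.
    noise = torch.rand(B, N, device=tokens.device)
    ids_shuffle = noise.argsort(dim=1)
    ids_keep = ids_shuffle[:, :n_keep]
    visible = torch.gather(tokens, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))

    # Binary mask (1 = masked) used when computing the reconstruction loss.
    mask = torch.ones(B, N, device=tokens.device)
    mask.scatter_(1, ids_keep, 0.0)
    return visible, mask, ids_shuffle
```

Because only the visible tokens pass through the encoder, high mask ratios make pretraining cheap enough to scale to billions of crops.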
MolE: a molecular foundation model for drug discovery
Méndez-Lucio, Oscar, Nicolaou, Christos, Earnshaw, Berton
Models that accurately predict properties from chemical structure are valuable tools in drug discovery. However, for many properties, public and private training sets are typically small, making it difficult for models to generalize beyond the training data. Recently, large language models have addressed this problem through self-supervised pretraining on large unlabeled datasets followed by fine-tuning on smaller, labeled datasets. In this paper, we report MolE, a molecular foundation model that adapts the DeBERTa architecture for use on molecular graphs, together with a two-step pretraining strategy. The first pretraining step is a self-supervised approach focused on learning chemical structures, and the second is a massive multi-task approach to learning biological information. We show that fine-tuning pretrained MolE achieves state-of-the-art results on 9 of the 22 ADMET tasks included in the Therapeutics Data Commons.
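The key architectural move in the abstract is running DeBERTa's relative-position attention over graphs rather than token sequences. One natural realization, sketched below with RDKit, is to treat atoms as tokens and topological (shortest-path) distances as relative positions; this featurization is an assumption for illustration and not necessarily MolE's exact scheme.

```python
from rdkit import Chem
from rdkit.Chem import rdmolops

def graph_inputs(smiles, max_dist=8):
    """Sketch of DeBERTa-style inputs for a molecule: atom tokens plus a
    relative-position matrix given by topological distance between atoms.
    Illustrative only; the actual MolE featurization may differ."""
    mol = Chem.MolFromSmiles(smiles)
    # One token per atom; the element symbol serves as a crude token identity.
    tokens = [atom.GetSymbol() for atom in mol.GetAtoms()]
    # Shortest-path distances stand in for sequence-relative positions,
    # clipped so distant atom pairs share one relative-position bucket.
    dist = rdmolops.GetDistanceMatrix(mol).astype(int).clip(max=max_dist)
    return tokens, dist

tokens, dist = graph_inputs("CCO")  # ethanol: ['C', 'C', 'O'], 3x3 distances
```

Feeding the distance matrix into disentangled attention in place of sequence offsets makes the model invariant to atom ordering, which is what lets a text-style transformer operate on molecular graphs.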