Efficient Biological Data Acquisition through Inference Set Design
Ihor Neporozhnii, Julien Roy, Emmanuel Bengio, Jason Hartford
In drug discovery, highly automated high-throughput laboratories are used to screen a large number of compounds in search of effective drugs. These experiments are expensive, so one might hope to reduce their cost by experimenting on a subset of the compounds and predicting the outcomes of the remaining experiments. In this work, we model this scenario as a sequential subset selection problem: we aim to select the smallest set of candidates in order to achieve some desired level of accuracy for the system as a whole. Our key observation is that, if there is heterogeneity in the difficulty of the prediction problem across the input space, selectively obtaining the labels for the hardest examples in the acquisition pool leaves only the relatively easy examples in the inference set, leading to better overall system performance. We call this mechanism inference set design, and propose a confidence-based active learning solution to prune out these challenging examples. Our algorithm includes an explicit stopping criterion that halts experimentation once it is sufficiently confident that the system has reached the target performance. Our empirical studies on image and molecular datasets, as well as a real-world large-scale biological assay, show that active learning for inference set design leads to significant reductions in experimental cost while retaining high system performance.

Automated high-throughput screening (HTS) laboratories have enabled scientists to screen large compound libraries to find effective therapeutic compounds and to screen whole-genome CRISPR knockouts to understand the effects of genes on cell function (Mayr & Bojanic, 2009; Wildey et al., 2017; Blay et al., 2020; Tom et al., 2024; Fay et al., 2023). However, conducting experiments on every compound or gene in these vast design spaces remains very resource-intensive. Reducing experimental costs without compromising the quality of the generated data would accelerate biological and pharmaceutical research and expand the set of molecules considered for testing. To avoid costs scaling with the number of experiments, we can train a model on a subset of the target library that has been tested in the lab, and then predict experimental outcomes for the remainder of the library using the trained model (Naik et al., 2013; Reker & Schneider, 2015; Dara et al., 2022), thereby building a hybrid screen of the library.
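To make the mechanism concrete, the sketch below illustrates a generic confidence-based active learning loop of the kind the abstract describes: repeatedly acquire labels for the least confident examples and stop once the remaining inference set looks sufficiently easy. This is an illustrative approximation, not the authors' implementation; the classifier choice, the batch size, the use of mean maximum class probability as a proxy for system accuracy, and the function name `hybrid_screen` are all assumptions introduced here for clarity.

```python
# Illustrative sketch (assumed details, not the paper's exact algorithm):
# confidence-based acquisition of hard examples plus an explicit stopping rule.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def hybrid_screen(X_pool, y_oracle, batch_size=256, target_acc=0.95, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X_pool)
    labeled = rng.choice(n, size=batch_size, replace=False).tolist()
    unlabeled = [i for i in range(n) if i not in set(labeled)]

    while unlabeled:
        # Train on everything acquired so far (the "lab-tested" subset).
        model = RandomForestClassifier(n_estimators=200, random_state=seed)
        model.fit(X_pool[labeled], y_oracle[labeled])

        # Confidence = max predicted class probability on the inference set.
        conf = model.predict_proba(X_pool[unlabeled]).max(axis=1)

        # Stopping criterion (simplified): if mean confidence on the remaining
        # examples exceeds the target, predict the rest instead of running
        # further experiments. The paper's criterion is a calibrated estimate;
        # this proxy only illustrates the idea.
        if conf.mean() >= target_acc:
            break

        # Otherwise acquire labels for the hardest (least confident) examples.
        picks = np.argsort(conf)[:batch_size]
        acquired = [unlabeled[i] for i in picks]
        labeled.extend(acquired)
        unlabeled = [i for i in unlabeled if i not in set(acquired)]

    # Labeled indices were measured in the lab; unlabeled ones are predicted,
    # together forming the hybrid screen of the library.
    return model, labeled, unlabeled
```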
arXiv.org Artificial Intelligence
Nov-25-2024
- Country:
- North America > United States (0.28)
- Genre:
- Research Report > New Finding (0.93)