MolPILE: a large-scale, diverse dataset for molecular representation learning
Adamczyk, Jakub, Poziemski, Jakub, Job, Franciszek, Król, Mateusz, Makowski, Maciej
–arXiv.org Artificial Intelligence
The size, diversity, and quality of pretraining datasets critically determine the generalization ability of foundation models. Despite their growing importance in chemoinformatics, the effectiveness of molecular representation learning has been hindered by limitations in existing small-molecule datasets. To address this gap, we present MolPILE, a large-scale, diverse, and rigorously curated collection of 222 million compounds, constructed from 6 large-scale databases using an automated curation pipeline. We present a comprehensive analysis of current pretraining datasets, highlighting considerable shortcomings for training ML models, and demonstrate how retraining existing models on MolPILE yields improvements in generalization performance. This work provides a standardized resource for model training, addressing the pressing need for an ImageNet-like dataset in molecular chemistry. Modern chemoinformatics relies extensively on machine learning (ML) methods, particularly for virtual ...
Sep-26-2025