Atom-anchored LLMs speak Chemistry: A Retrosynthesis Demonstration
Alan Kai Hassen, Andrius Bernatavicius, Antonius P. A. Janssen, Mike Preuss, Gerard J. P. van Westen, Djork-Arné Clevert
arXiv.org Artificial Intelligence
Applications of machine learning in chemistry are often limited by the scarcity and expense of labeled data, restricting traditional supervised methods. In this work, we introduce a framework for molecular reasoning using general-purpose Large Language Models (LLMs) that operates without requiring labeled training data. Our method anchors chain-of-thought reasoning to the molecular structure by using unique atomic identifiers. First, the LLM performs a one-shot task to identify relevant fragments and their associated chemical labels or transformation classes. In an optional second step, this position-aware information is used in a few-shot task with provided class examples to predict the chemical transformation. We apply our framework to single-step retrosynthesis, a task where LLMs have previously underperformed. Across academic benchmarks and expert-validated drug discovery molecules, our work enables LLMs to achieve high success rates in identifying chemically plausible reaction sites ($\geq90\%$), named reaction classes ($\geq40\%$), and final reactants ($\geq74\%$). Beyond solving complex chemical tasks, our work also provides a method to generate theoretically grounded synthetic datasets by mapping chemical knowledge onto the molecular structure, thereby addressing data scarcity.
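The abstract does not specify how the unique atomic identifiers are represented; a common convention in cheminformatics is SMILES atom-map numbers (`[C:1]`, `[O:2]`, ...), which give each atom a stable positional anchor that a model can reference in its reasoning. The sketch below illustrates this idea with a deliberately simplified pure-Python tokenizer; `anchor_atoms` is a hypothetical helper, and a real pipeline would use a cheminformatics toolkit such as RDKit for robust SMILES parsing.

```python
import re

# Simplified SMILES atom tokenizer for illustration only: matches two-letter
# halogens, bracketed atoms, and common one-letter organic-subset atoms.
ATOM = re.compile(r"Cl|Br|\[[^\]]+\]|[BCNOPSFI]|[bcnops]")

def anchor_atoms(smiles: str) -> str:
    """Annotate every atom in a SMILES string with a unique atom-map number."""
    out, idx, i = [], 0, 0
    while i < len(smiles):
        m = ATOM.match(smiles, i)
        if m:
            idx += 1
            tok = m.group(0)
            if tok.startswith("["):
                # Reuse the existing brackets: [NH4+] -> [NH4+:5]
                out.append(f"{tok[:-1]}:{idx}]")
            else:
                # Wrap bare atoms in brackets: C -> [C:5]
                out.append(f"[{tok}:{idx}]")
            i = m.end()
        else:
            # Bonds, branches, and ring-closure digits pass through unchanged.
            out.append(smiles[i])
            i += 1
    return "".join(out)

# Example: aspirin, with every atom given a positional anchor.
print(anchor_atoms("CC(=O)Oc1ccccc1C(=O)O"))
```

With such anchors in place, the one-shot prompt described above can ask the LLM to name a reaction site by its identifiers (e.g. the ester at atoms 2-4) rather than by ambiguous substructure descriptions.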
Oct-21-2025