Goto

Collaborating Authors

 Shee, Yu


Quantum Machine Learning in Drug Discovery: Applications in Academia and Pharmaceutical Industries

arXiv.org Machine Learning

In this introduction, we discuss the general methodology of quantum computing based on unitary transformations (gates) of quantum registers, which underpin the potential advancements in computational power over classical systems. We introduce the unique properties of quantum bits (qubits); quantum calculations, implemented by algorithms that evolve qubit states through unitary transformations and conclude with measurements that collapse the superposition states to produce specific outcomes; and, lastly, the challenges of practical quantum computing, where noise limits current devices and hybrid approaches that integrate quantum and classical computing are used to address these limitations. This introductory discussion sets the stage for a deeper exploration of quantum computing for machine learning applications in subsequent sections. Calculations with quantum computers generally require evolving the state of a quantum register by applying a sequence of pulses that implement unitary transformations according to a designed algorithm. A measurement of the resulting quantum state then collapses the coherent state, yielding a specific outcome of the calculation. To obtain reliable results, the process is typically repeated thousands of times, with averages taken over all measurements to account for quantum randomness and ensure statistical accuracy. This repetition is essential to achieve convergence, as each individual measurement provides only probabilistic information about the quantum state. Quantum registers are commonly based on qubits. Like classical bits, qubits can be observed in either of two possible states (0 or 1).
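
The evolve-measure-repeat workflow described in this abstract can be made concrete with a minimal classical simulation. The sketch below is an illustrative toy only, not taken from the paper: the single-qubit register, the Hadamard gate, the Z observable, and the 10,000-shot count are arbitrary assumptions chosen to show how a unitary evolves the register, how a measurement collapses it to 0 or 1, and why many repetitions are averaged.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Single-qubit register initialized to |0> = (1, 0)^T.
state = np.array([1.0, 0.0], dtype=complex)

# One unitary gate: the Hadamard, which creates an equal superposition.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
state = H @ state  # evolve the register according to the "algorithm"

# Born rule: measurement outcome k occurs with probability |<k|psi>|^2.
probs = np.abs(state) ** 2

# Each measurement collapses the state to 0 or 1 probabilistically, so the
# circuit is re-run many times ("shots") and the outcomes are averaged.
shots = 10_000
outcomes = rng.choice([0, 1], size=shots, p=probs)

# Estimate <Z> = p(0) - p(1); for the Hadamard state this converges to ~0.
expectation_z = np.mean(1 - 2 * outcomes)
print(f"p(1) ~ {outcomes.mean():.3f}, <Z> ~ {expectation_z:.3f}")
```

The spread of the shot-averaged estimate shrinks as the number of repetitions grows, which is the statistical convergence the abstract refers to; real hardware adds gate and readout noise on top of this shot noise, motivating the hybrid quantum-classical approaches mentioned above.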


DirectMultiStep: Direct Route Generation for Multi-Step Retrosynthesis

arXiv.org Artificial Intelligence

Traditional computer-aided synthesis planning (CASP) methods rely on iterative single-step predictions, leading to exponential search space growth that limits efficiency and scalability. We introduce a transformer-based model that directly generates multi-step synthetic routes as a single string by conditionally predicting each molecule based on all preceding ones. The model accommodates specific conditions such as the desired number of steps and starting materials, outperforming state-of-the-art methods on the PaRoutes dataset with a 2.2x improvement in Top-1 accuracy on the n_1 test set and a 3.3x improvement on the n_5 test set. It also successfully predicts routes for FDA-approved drugs not included in the training data, showcasing its generalization capabilities. While the current suboptimal diversity of the training set may impact performance on less common reaction types, our approach presents a promising direction towards fully automated retrosynthetic planning.
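
The abstract describes decoding an entire route autoregressively, with each molecule conditioned on the target, the requested number of steps, the starting materials, and everything generated so far. The sketch below is a toy illustration of that decoding loop only: the token vocabulary, the conditioning-string format, and the next_token_logits stand-in (a random scorer in place of the trained transformer) are assumptions made for illustration, not the paper's actual interface.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["C", "O", "N", "(", ")", "=", ">", ";", "<eos>"]  # toy token set

def next_token_logits(prefix_tokens):
    """Stand-in for the trained transformer decoder: scores every vocabulary
    token given the tokens so far. Random here, purely so the loop runs."""
    return rng.normal(size=len(VOCAB))

def generate_route(target_smiles, n_steps, starting_materials, max_len=64):
    """Toy greedy decoding of a multi-step route as one string. The
    conditioning information (target, desired step count, starting materials)
    is prepended, so each new token is predicted from all preceding ones."""
    prefix = list(f"{target_smiles}|{n_steps}|{'.'.join(starting_materials)}|")
    generated = []
    for _ in range(max_len):
        logits = next_token_logits(prefix + generated)
        token = VOCAB[int(np.argmax(logits))]  # greedy choice at each step
        if token == "<eos>":
            break
        generated.append(token)
    return "".join(generated)

# Hypothetical call: aspirin as the target, two steps, salicylic acid as a start.
print(generate_route("CC(=O)Oc1ccccc1C(=O)O", n_steps=2,
                     starting_materials=["Oc1ccccc1C(=O)O"]))
```

A trained model would replace the random scorer, and a more sophisticated decoding strategy could replace the greedy choice; the point of the sketch is the single-string, fully conditional generation pattern.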


Kernel-Elastic Autoencoder for Molecular Design

arXiv.org Artificial Intelligence

We introduce the Kernel-Elastic Autoencoder (KAE), a self-supervised generative model based on the transformer architecture with enhanced performance for molecular design. KAE is formulated around two novel loss functions: a modified maximum mean discrepancy and a weighted reconstruction loss. It addresses the long-standing challenge of achieving valid generation and accurate reconstruction at the same time, attaining remarkable diversity in molecule generation while maintaining near-perfect reconstructions on an independent test set and surpassing previous molecule-generating models. KAE enables conditional generation and supports beam-search decoding, resulting in state-of-the-art performance in constrained optimization. Furthermore, KAE can generate molecules conditioned on favorable binding affinities in docking applications, as confirmed by AutoDock Vina and Glide scores, outperforming all existing candidates from the training dataset. Beyond molecular design, we anticipate that KAE's generative approach could be applied to a wide range of problems.
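
The abstract names two loss terms, a modified maximum mean discrepancy (MMD) and a weighted reconstruction loss, without specifying them, so the sketch below is only a generic illustration of how such a combined objective can be assembled: a standard RBF-kernel MMD between encoder latents and prior samples plus a per-token weighted cross-entropy over decoder logits. The kernel bandwidth, the 10.0 mixing weight, the uniform token weights, and the toy batch shapes are placeholder assumptions, not KAE's actual formulation.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """RBF kernel matrix k(x_i, y_j) = exp(-||x_i - y_j||^2 / (2 sigma^2))."""
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd(z, z_prior, sigma=1.0):
    """Squared maximum mean discrepancy between latent codes and prior
    samples; small values mean the aggregate posterior matches the prior."""
    return (gaussian_kernel(z, z, sigma).mean()
            + gaussian_kernel(z_prior, z_prior, sigma).mean()
            - 2.0 * gaussian_kernel(z, z_prior, sigma).mean())

def weighted_reconstruction(logits, targets, weights):
    """Token-level cross-entropy with per-position weights, e.g. to
    emphasize particular positions in the reconstructed SMILES string."""
    logits = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(-1, keepdims=True))
    nll = -np.take_along_axis(log_probs, targets[..., None], axis=-1)[..., 0]
    return (weights * nll).sum() / weights.sum()

# Toy batch: 8 molecules, 2-D latents, length-5 sequences over a 10-token vocab.
rng = np.random.default_rng(0)
z, z_prior = rng.normal(size=(8, 2)), rng.normal(size=(8, 2))
logits = rng.normal(size=(8, 5, 10))
targets = rng.integers(0, 10, size=(8, 5))
weights = np.ones((8, 5))  # placeholder: uniform weighting of every token

total_loss = weighted_reconstruction(logits, targets, weights) + 10.0 * mmd(z, z_prior)
print(f"toy combined loss = {total_loss:.3f}")
```

In a real training loop the latents and logits would come from the transformer encoder and decoder, and the two terms would be balanced by a tuned coefficient rather than the arbitrary 10.0 used here.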