SchNet: A continuous-filter convolutional neural network for modeling quantum interactions
Deep learning has the potential to revolutionize quantum chemistry, as it is ideally suited to learning representations for structured data and speeding up the exploration of chemical space. While convolutional neural networks have proven to be the first choice for image, audio and video data, the atoms in molecules are not restricted to a grid. Instead, their precise locations contain essential physical information that would be lost if discretized. Thus, we propose continuous-filter convolutional layers, which model local correlations without requiring the data to lie on a grid. We apply these layers in SchNet: a novel deep learning architecture modeling quantum interactions in molecules. We obtain a joint model for the total energy and interatomic forces that follows fundamental quantum-chemical principles. Our architecture achieves state-of-the-art performance on benchmarks of equilibrium molecules and molecular dynamics trajectories. Finally, we introduce a more challenging benchmark with chemical and structural variations that suggests the path for further work.
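The core mechanism described above, a convolution whose filter weights are generated from continuous interatomic distances rather than indexed on a grid, can be illustrated with a minimal NumPy sketch. This is not SchNet's actual architecture; it assumes a toy Gaussian-RBF "filter-generating network" and a single interaction pass:

```python
import numpy as np

def rbf_filter(distance, centers, gamma=10.0):
    """Toy filter-generating network: expand a scalar interatomic
    distance in Gaussian radial basis functions; each basis value
    serves as one filter weight per feature channel."""
    return np.exp(-gamma * (distance - centers) ** 2)

def continuous_filter_conv(features, positions, centers):
    """One continuous-filter convolution: each atom aggregates its
    neighbors' features, weighted by filters computed from continuous
    pairwise distances (no discretization onto a grid)."""
    n_atoms, n_feat = features.shape
    out = np.zeros_like(features)
    for i in range(n_atoms):
        for j in range(n_atoms):
            if i == j:
                continue
            d = np.linalg.norm(positions[i] - positions[j])
            out[i] += features[j] * rbf_filter(d, centers)
    return out
```

Because the filters depend only on distances, the output is invariant to rigid translations of the molecule, one of the "fundamental quantum-chemical principles" the abstract alludes to.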
Graph Convolutional Policy Network for Goal-Directed Molecular Graph Generation
Generating novel graph structures that optimize given objectives while obeying underlying rules is fundamental for chemistry, biology and social science research. This is especially important in molecular graph generation, whose goal is to discover novel molecules with desired properties such as drug-likeness and synthetic accessibility while obeying physical laws such as chemical valency. However, designing models that find molecules optimizing desired properties while incorporating highly complex and non-differentiable rules remains a challenging task. Here we propose the Graph Convolutional Policy Network (GCPN), a general graph convolutional network based model for goal-directed graph generation through reinforcement learning. The model is trained via policy gradient to optimize domain-specific rewards and an adversarial loss, and acts in an environment that incorporates domain-specific rules. Experimental results show that GCPN achieves a 61% improvement on chemical property optimization over state-of-the-art baselines while resembling known molecules, and a 184% improvement on the constrained property optimization task.
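The training signal described above is standard policy gradient: increase the log-probability of actions in proportion to the reward they earn. A minimal REINFORCE sketch on a categorical policy (a toy stand-in, not GCPN's graph-building action space) looks like:

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def reinforce_step(logits, action, reward, lr=0.5):
    """One REINFORCE update: nudge the logits so the taken action's
    log-probability rises in proportion to its reward."""
    probs = softmax(logits)
    grad_log_pi = -probs
    grad_log_pi[action] += 1.0  # d log pi(action) / d logits
    return logits + lr * reward * grad_log_pi
```

Repeatedly rewarding one action concentrates probability mass on it; in GCPN the "actions" are bond/atom additions and the reward combines property scores, valency rules and the adversarial term.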
Chemistry may not be the 'killer app' for quantum computers after all
Quantum chemistry calculations that could advance drug development or agriculture have recently emerged as a promising "killer application" of quantum computers, but a new analysis suggests this is unlikely to be the case. Progress in building quantum computers has greatly accelerated in recent years, but it remains an open question which uses are most likely to justify the ongoing investment in this technology. One popular contender is solving problems in quantum chemistry, such as calculating the energy levels of molecules relevant for biomedicine or industry. This requires accounting for the behavior of many quantum particles - electrons in the molecule - simultaneously, so it seems like a good match for computers made from many quantum parts. However, Xavier Waintal at CEA Grenoble in France and his colleagues have now shown that two leading quantum computing algorithms for this task may actually have, at best, limited use.
- Europe > France > Auvergne-Rhône-Alpes > Isère > Grenoble (0.25)
- North America > United States (0.05)
- Europe > Switzerland > Zürich > Zürich (0.05)
- Information Technology > Hardware (1.00)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence (1.00)
- North America > United States > California > Santa Barbara County > Santa Barbara (0.14)
- Asia > Middle East > Jordan (0.04)
- Research Report > New Finding (0.67)
- Research Report > Experimental Study (0.46)
Permutation-Invariant Variational Autoencoder for Graph-Level Representation Learning
Most work, however, focuses on either node- or graph-level supervised learning, such as node, link or graph classification, or node-level unsupervised learning (e.g., node clustering). Despite its wide range of possible applications, graph-level unsupervised representation learning has not received much attention yet. This might be mainly attributed to the high representation complexity of graphs, which can be represented by n! equivalent adjacency matrices, where n is the number of nodes. In this work we address this issue by proposing a permutation-invariant variational autoencoder for graph structured data.
- Asia > China > Beijing > Beijing (0.04)
- North America > United States (0.04)
- North America > Canada (0.04)
- Asia > Middle East > Jordan (0.04)
- Research Report (0.46)
- Workflow (0.46)
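The n!-equivalent-adjacency-matrices problem from the abstract above has a standard mitigation: a permutation-invariant readout. A minimal NumPy sketch (one message-passing step plus sum pooling, an assumed illustration rather than the paper's VAE) shows that relabeling the nodes leaves the graph-level vector unchanged:

```python
import numpy as np

def graph_embedding(adj, feats):
    """One message-passing step followed by sum pooling. The
    order-independent sum readout maps all n! equivalent adjacency
    matrices of a graph to the same graph-level representation."""
    h = adj @ feats + feats        # aggregate neighbor features
    return np.tanh(h).sum(axis=0)  # permutation-invariant readout
```

If P is a permutation matrix, the hidden states transform as h' = P(A X + X) = P h, and summing over rows erases the reordering, which is exactly the invariance a graph-level encoder needs.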
5f268dfb0fbef44de0f668a022707b86-AuthorFeedback.pdf
The reason that the method MSO in "Efficient multi-objective molecular optimization in a continuous latent space" achieved a higher penalized logP with unlimited property evaluations than ours (26.1 vs 15.18) is due to different experimental settings. With a larger Lmax, the best penalized logP score can be significantly increased. We have started running the experiments on GuacaMol as suggested. We will fix these two figures in the final version. All generated molecules in the appendix have been double-checked by both RDKit and human experts.
- North America > United States (0.28)
- Asia > China > Hong Kong (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.67)