tensornet
TensorNet: Cartesian Tensor Representations for Efficient Learning of Molecular Potentials
Simeon, Guillem, De Fabritiis, Gianni
The development of efficient machine learning models for the representation of molecular systems is becoming crucial in scientific research. We introduce TensorNet, an innovative O(3)-equivariant message-passing neural network architecture that leverages Cartesian tensor representations. By using Cartesian tensor atomic embeddings, feature mixing is simplified through matrix product operations. Furthermore, the cost-effective decomposition of these tensors into rotation group irreducible representations allows for the separate processing of scalars, vectors, and tensors when necessary. Compared to higher-rank spherical tensor models, TensorNet demonstrates state-of-the-art performance with significantly fewer parameters. For small-molecule potential energies, this can be achieved even with a single interaction layer. As a result of all these properties, the model's computational cost is substantially decreased. Moreover, the model can accurately predict vector and tensor molecular quantities on top of potential energies and forces. In summary, TensorNet's framework opens up a new space for the design of state-of-the-art equivariant models.
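To make the decomposition concrete, here is a minimal PyTorch sketch of splitting a rank-2 Cartesian tensor into its scalar (trace), vector (antisymmetric), and symmetric traceless parts. The function name and batch shape are illustrative, not TensorNet's actual API.

```python
import torch

def decompose(X):
    """Split a rank-2 Cartesian tensor X of shape (..., 3, 3) into its
    O(3)-irreducible parts: scalar (trace), vector (antisymmetric), and
    rank-2 symmetric traceless components, with X = I0 + A + S."""
    trace = X.diagonal(dim1=-2, dim2=-1).sum(-1)      # tr(X), shape (...,)
    I0 = trace[..., None, None] / 3 * torch.eye(3)    # scalar part
    A = 0.5 * (X - X.transpose(-2, -1))               # antisymmetric (vector) part
    S = 0.5 * (X + X.transpose(-2, -1)) - I0          # symmetric traceless part
    return I0, A, S

X = torch.randn(5, 3, 3)            # e.g. five atomic tensor features
I0, A, S = decompose(X)
assert torch.allclose(I0 + A + S, X, atol=1e-6)

# Feature mixing reduces to plain matrix products on the full tensors:
Y = X @ X
```

Each of the three parts transforms independently under rotations, which is what allows scalars, vectors, and rank-2 components to be processed separately when needed.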
TorchMD-Net 2.0: Fast Neural Network Potentials for Molecular Simulations
Pelaez, Raul P., Simeon, Guillem, Galvelis, Raimondas, Mirarchi, Antonio, Eastman, Peter, Doerr, Stefan, Thölke, Philipp, Markland, Thomas E., De Fabritiis, Gianni
Achieving a balance between computational speed, prediction accuracy, and universal applicability in molecular simulations has been a persistent challenge. This paper presents substantial advancements in the TorchMD-Net software, a pivotal step forward in the shift from conventional force fields to neural network-based potentials. The evolution of TorchMD-Net into a more comprehensive and versatile framework is highlighted, incorporating cutting-edge architectures such as TensorNet. This transformation is achieved through a modular design approach, encouraging customized applications within the scientific community. The most notable enhancement is a significant improvement in computational efficiency: the computation of energy and forces for TensorNet models is accelerated by 2- to 10-fold over previous iterations. Other enhancements include highly optimized neighbor-search algorithms that support periodic boundary conditions and smooth integration with existing molecular dynamics frameworks. Additionally, the updated version introduces the capability to integrate physical priors, further enriching its application spectrum and utility in research. The software is available at https://github.com/torchmd/torchmd-net.
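For orientation, the sketch below shows the kind of computation such a neighbor search performs: a brute-force, minimum-image neighbor list under orthorhombic periodic boundary conditions. It is a reference implementation only, not TorchMD-Net's optimized kernels, and all names and the orthorhombic-box assumption are illustrative.

```python
import torch

def neighbor_pairs(pos, box, cutoff):
    """Brute-force O(N^2) neighbor search under orthorhombic periodic
    boundary conditions using the minimum-image convention.

    pos:    (N, 3) Cartesian coordinates
    box:    (3,) orthorhombic box lengths
    cutoff: scalar distance cutoff
    Returns (2, P) indices of unique pairs within the cutoff."""
    delta = pos.unsqueeze(1) - pos.unsqueeze(0)     # (N, N, 3) displacements
    delta -= box * torch.round(delta / box)         # minimum-image wrap
    dist = delta.norm(dim=-1)                       # (N, N) pair distances
    i, j = torch.triu_indices(len(pos), len(pos), offset=1)
    mask = dist[i, j] < cutoff
    return torch.stack([i[mask], j[mask]])

box = torch.tensor([10.0, 10.0, 10.0])
pos = torch.rand(100, 3) * box
pairs = neighbor_pairs(pos, box, cutoff=5.0)
```

Production codes replace the O(N^2) pairwise scan with cell lists or Verlet lists, which is where the reported speedups come from.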
On the Inclusion of Charge and Spin States in Cartesian Tensor Neural Network Potentials
Simeon, Guillem, Mirarchi, Antonio, Pelaez, Raul P., Galvelis, Raimondas, De Fabritiis, Gianni
In this letter, we present an extension to TensorNet, a state-of-the-art equivariant Cartesian tensor neural network potential, allowing it to handle charged molecules and spin states without architectural changes or increased costs. By incorporating these attributes, we address input degeneracy issues, enhancing the model's predictive accuracy across diverse chemical systems. This advancement significantly broadens TensorNet's applicability, maintaining its efficiency and accuracy.
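As a rough illustration of the degeneracy problem and one way to condition per-atom embeddings on global attributes, consider the following sketch. The class, the multiplicative modulation, and all names are hypothetical; the letter's actual mechanism may differ.

```python
import torch
import torch.nn as nn

class ChargeSpinEmbedding(nn.Module):
    """Illustrative sketch: condition per-atom embeddings on global
    molecular attributes (total charge q, spin multiplicity s) so that
    systems with identical geometries but different charge/spin states
    map to distinct features. Not the exact mechanism of the paper."""

    def __init__(self, num_elements=100, dim=128):
        super().__init__()
        self.element = nn.Embedding(num_elements, dim)
        self.state = nn.Linear(2, dim)   # maps (q, s) to a feature-wise modulation

    def forward(self, z, q, s):
        # z: (N,) atomic numbers; q, s: scalars for the whole molecule
        h = self.element(z)                                   # (N, dim) per-atom features
        mod = self.state(torch.tensor([float(q), float(s)]))  # (dim,) global modulation
        return h * (1.0 + mod)                                # breaks input degeneracy

emb = ChargeSpinEmbedding()
z = torch.tensor([8, 1, 1])          # e.g. a water-like fragment
h_neutral = emb(z, q=0, s=1)
h_cation = emb(z, q=1, s=2)          # same atoms, different state -> different features
```

Without such conditioning, a neutral molecule and its ion present identical inputs to a geometry-only model, so no architecture could distinguish their energies.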
Transfer Learning in Tensorflow: Part 2 – Towards Data Science
This is the second part of Transfer Learning in TensorFlow (VGG19 on CIFAR-10). The first part can be found here. The previous article covered 'Transfer Learning', 'Choice of Model', 'Choice of the Model Implementation', 'Know How to Create the Model', and 'Know About the Last Layer'. In short, Part 1 is a preparatory step before training and prediction. In this article (Part 2), I will go over how to load pre-trained parameters, how to re-scale input images, and how to choose the batch size, and then we will look at the results.
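A condensed Keras sketch of the workflow the article walks through: loading pre-trained VGG19 weights, resizing CIFAR-10 inputs to the network's expected resolution, and picking a batch size. The hyperparameters here are illustrative and the article's exact recipe may differ.

```python
import tensorflow as tf

# Load VGG19 with ImageNet weights, dropping the classification head.
base = tf.keras.applications.VGG19(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False   # freeze the pre-trained convolutional layers

model = tf.keras.Sequential([
    tf.keras.layers.Resizing(224, 224),          # re-scale 32x32 CIFAR-10 inputs
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

(x_train, y_train), _ = tf.keras.datasets.cifar10.load_data()
x_train = tf.keras.applications.vgg19.preprocess_input(x_train.astype("float32"))

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=64, epochs=1)  # batch size is bounded by GPU memory
```

The batch size trades off gradient noise against memory: 224x224 activations through VGG19 are large, so 32 to 64 is a common starting point on a single consumer GPU.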
TensorNet: Tensorizing Neural Networks (implementation)
Deep neural networks currently demonstrate state-of-the-art performance in several domains. At the same time, models of this class are very demanding in terms of computational resources. In particular, a large amount of memory is required by commonly used fully-connected layers, making it hard to use the models on low-end devices and stopping the further increase of the model size. In this paper we convert the dense weight matrices of the fully-connected layers to the Tensor Train format such that the number of parameters is reduced by a huge factor and at the same time the expressive power of the layer is preserved. In particular, for the Very Deep VGG networks we report the compression factor of the dense weight matrix of a fully-connected layer up to 200000 times leading to the compression factor of the whole network up to 7 times.
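To make the compression concrete, here is a minimal two-core NumPy sketch of a Tensor Train factorization of a dense layer. The mode sizes, rank, and names are chosen for illustration; the paper works with deeper factorizations whose cores are learned end to end.

```python
import numpy as np

# Dense layer 256 -> 256 (65,536 weights), re-expressed in Tensor Train
# format with input/output modes (16, 16) and TT-rank r = 4.
r = 4
G1 = np.random.randn(1, 16, 16, r) * 0.1    # core 1: (r0, m1, n1, r1)
G2 = np.random.randn(r, 16, 16, 1) * 0.1    # core 2: (r1, m2, n2, r2)
print("TT parameters:", G1.size + G2.size)  # 2,048 vs 65,536 dense (32x smaller)

def tt_matvec(x):
    """Multiply a batch x: (B, 256) by the TT-format weight matrix."""
    xt = x.reshape(-1, 16, 16)                       # split the input into modes
    # Contract input mode m_k against core k; the ranks link the cores.
    y = np.einsum('bij,aipr,rjqc->bpq', xt, G1, G2)
    return y.reshape(-1, 256)

x = np.random.randn(32, 256)
y = tt_matvec(x)                                     # (32, 256)

# Equivalent dense matrix, for checking:
# W[(i,j),(p,q)] = sum_r G1[0,i,p,r] * G2[r,j,q,0]
W = np.einsum('aipr,rjqc->ijpq', G1, G2).reshape(256, 256)
assert np.allclose(y, x @ W, atol=1e-8)
```

The parameter count of a TT layer grows linearly in the number of modes rather than multiplicatively, which is what makes compression factors in the thousands possible for very wide fully-connected layers.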