No One Is Quite Sure Why Ice Is Slippery
A thin, watery layer coating the surface of ice is what makes it slick.

The reason we can gracefully glide on an ice-skating rink, or clumsily slip on an icy sidewalk, is that the surface of ice is coated by a thin, watery layer. Scientists generally agree that this lubricating, liquidlike layer is what makes ice slippery. They disagree, though, about why the layer forms. Three main theories about the phenomenon have been debated over the past two centuries.
Predictive Free Energy Simulations Through Hierarchical Distillation of Quantum Hamiltonians
Li, Chenghan, Chan, Garnet Kin-Lic
Obtaining the free energies of condensed phase chemical reactions remains computationally prohibitive for high-level quantum mechanical methods. We introduce a hierarchical machine learning framework that bridges this gap by distilling knowledge from a small number of high-fidelity quantum calculations into increasingly coarse-grained, machine-learned quantum Hamiltonians. By retaining explicit electronic degrees of freedom, our approach further enables a faithful embedding of quantum and classical degrees of freedom that captures long-range electrostatics and the quantum response to a classical environment to infinite order. As validation, we compute the proton dissociation constants of weak acids and the kinetic rate of an enzymatic reaction entirely from first principles, reproducing experimental measurements within chemical accuracy or their uncertainties. Our work demonstrates a path to condensed phase simulations of reaction free energies at the highest levels of accuracy with converged statistics.
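The pKa validation mentioned above rests on the standard thermodynamic relation pKa = ΔG_deprot / (RT ln 10). A minimal sketch of that conversion (the free-energy value below is illustrative, not taken from the paper):

```python
import math

R = 8.314462618e-3  # gas constant in kJ/(mol*K)

def pka_from_free_energy(dG_kJ_per_mol: float, T: float = 298.15) -> float:
    """Convert a deprotonation free energy (kJ/mol) to a pKa via
    pKa = dG / (R * T * ln(10))."""
    return dG_kJ_per_mol / (R * T * math.log(10))

# Illustrative: a dG of ~27.2 kJ/mol at 298 K corresponds to a pKa near 4.77,
# roughly the experimental value for acetic acid.
print(round(pka_from_free_energy(27.2), 2))
```

The same relation, run in reverse, shows why "chemical accuracy" (about 4 kJ/mol) matters: an error of that size shifts a predicted pKa by roughly 0.7 units.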
Magistral
Mistral AI: Rastogi, Abhinav, Jiang, Albert Q., Lo, Andy, Berrada, Gabrielle, Lample, Guillaume, Rute, Jason, Barmentlo, Joep, Yadav, Karmesh, Khandelwal, Kartik, Chandu, Khyathi Raghavi, Blier, Léonard, Saulnier, Lucile, Dinot, Matthieu, Darrin, Maxime, Gupta, Neha, Soletskyi, Roman, Vaze, Sagar, Scao, Teven Le, Wang, Yihan, Yang, Adam, Liu, Alexander H., Sablayrolles, Alexandre, Héliou, Amélie, Martin, Amélie, Ehrenberg, Andy, Agarwal, Anmol, Roux, Antoine, Darcet, Arthur, Mensch, Arthur, Bout, Baptiste, Rozière, Baptiste, De Monicault, Baudouin, Bamford, Chris, Wallenwein, Christian, Renaudin, Christophe, Lanfranchi, Clémence, Dabert, Darius, Mizelle, Devon, Casas, Diego de las, Chane-Sane, Elliot, Fugier, Emilien, Hanna, Emma Bou, Delerce, Gauthier, Guinet, Gauthier, Novikov, Georgii, Martin, Guillaume, Jaju, Himanshu, Ludziejewski, Jan, Chabran, Jean-Hadrien, Delignon, Jean-Malo, Studnia, Joachim, Amar, Jonas, Roberts, Josselin Somerville, Denize, Julien, Saxena, Karan, Jain, Kush, Zhao, Lingxiao, Martin, Louis, Gao, Luyu, Lavaud, Lélio Renard, Pellat, Marie, Guillaumin, Mathilde, Felardos, Mathis, Augustin, Maximilian, Seznec, Mickaël, Raghuraman, Nikhil, Duchenne, Olivier, Wang, Patricia, von Platen, Patrick, Saffer, Patryk, Jacob, Paul, Wambergue, Paul, Kurylowicz, Paula, Muddireddy, Pavankumar Reddy, Chagniot, Philomène, Stock, Pierre, Agrawal, Pravesh, Sauvestre, Romain, Delacourt, Rémi, Gandhi, Sanchit, Subramanian, Sandeep, Dalal, Shashwat, Gandhi, Siddharth, Ghosh, Soham, Mishra, Srijan, Aithal, Sumukh, Antoniak, Szymon, Schueller, Thibault, Lavril, Thibaut, Robert, Thomas, Wang, Thomas, Lacroix, Timothée, Nemychnikova, Valeriia, Paltz, Victor, Richard, Virgile, Li, Wen-Ding, Marshall, William, Zhang, Xuanyu, Tang, Yunhao
We introduce Magistral, Mistral's first reasoning model and our own scalable reinforcement learning (RL) pipeline. Instead of relying on existing implementations and RL traces distilled from prior models, we follow a ground-up approach, relying solely on our own models and infrastructure. Notably, we demonstrate a stack that enabled us to explore the limits of pure RL training of LLMs, present a simple method to force the reasoning language of the model, and show that RL on text data alone maintains most of the initial checkpoint's capabilities. We find that RL on text maintains or improves multimodal understanding, instruction following and function calling. We present Magistral Medium, trained for reasoning on top of Mistral Medium 3 with RL alone, and we open-source Magistral Small (Apache 2.0), which further includes cold-start data from Magistral Medium.
Molecular Learning Dynamics
Gusev, Yaroslav, Vanchurin, Vitaly
We apply the physics-learning duality to molecular systems by complementing the physical description of interacting particles with a dual learning description, where each particle is modeled as an agent minimizing a loss function. In the traditional physics framework, the equations of motion are derived from the Lagrangian function, while in the learning framework, the same equations emerge from learning dynamics driven by the agent loss function. The loss function depends on scalar quantities that describe invariant properties of all other agents or particles. To demonstrate this approach, we first infer the loss functions of oxygen and hydrogen directly from a dataset generated by the CP2K physics-based simulation of water molecules. We then employ the loss functions to develop a learning-based simulation of water molecules, which achieves comparable accuracy while being significantly more computationally efficient than standard physics-based simulations.
DAST: Difficulty-Adaptive Slow-Thinking for Large Reasoning Models
Shen, Yi, Zhang, Jian, Huang, Jieyun, Shi, Shuming, Zhang, Wenjing, Yan, Jiangze, Wang, Ning, Wang, Kai, Lian, Shiguo
Recent advancements in slow-thinking reasoning models have shown exceptional performance in complex reasoning tasks. However, these models often exhibit overthinking: they generate redundant reasoning steps for simple problems, leading to excessive computational resource usage. While current mitigation strategies uniformly reduce reasoning tokens, they risk degrading performance on challenging tasks that require extended reasoning. This paper introduces Difficulty-Adaptive Slow-Thinking (DAST), a novel framework that enables models to autonomously adjust the length of Chain-of-Thought (CoT) based on problem difficulty. We first propose a Token Length Budget (TLB) metric to quantify difficulty, then leverage length-aware reward shaping and length preference optimization to implement DAST. DAST penalizes overlong responses for simple tasks while incentivizing sufficient reasoning for complex problems. Experiments on diverse datasets and model scales demonstrate that DAST effectively mitigates overthinking (reducing token usage by over 30% on average) while preserving reasoning accuracy on complex problems.
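As a rough illustration of length-aware reward shaping, one can penalize tokens beyond a difficulty-dependent budget. This is a hypothetical linear form, not the paper's exact TLB-based formulation:

```python
def shaped_reward(correct: bool, n_tokens: int, budget: int,
                  alpha: float = 0.001) -> float:
    """Illustrative length-aware reward shaping (not the paper's exact form):
    correct answers earn a base reward, with a linear penalty for tokens
    beyond a difficulty-dependent budget; staying under budget costs nothing."""
    base = 1.0 if correct else 0.0
    overshoot = max(0, n_tokens - budget)
    return base - alpha * overshoot

# Easy problem (small budget): an overlong chain-of-thought is punished.
print(shaped_reward(True, n_tokens=2000, budget=500))   # 1.0 - 0.001*1500 = -0.5
# Hard problem (large budget): the same length is acceptable.
print(shaped_reward(True, n_tokens=2000, budget=4000))  # 1.0
```

The key property is asymmetry: the same response length can be over-budget for a simple problem yet well within budget for a hard one, which is what lets the model keep long reasoning where it is needed.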
Solvation Free Energies from Neural Thermodynamic Integration
Máté, Bálint, Fleuret, François, Bereau, Tristan
We present a method for computing free-energy differences using thermodynamic integration with a neural network potential that interpolates between two target Hamiltonians. The interpolation is defined at the sample distribution level, and the neural network potential is optimized to match the corresponding equilibrium potential at every intermediate time-step. Once the interpolating potentials and samples are well-aligned, the free-energy difference can be estimated using (neural) thermodynamic integration. To target molecular systems, we simultaneously couple Lennard-Jones and electrostatic interactions and model the rigid-body rotation of molecules. We report accurate results for several benchmark systems: a Lennard-Jones particle in a Lennard-Jones fluid, as well as the insertion of both water and methane solutes in a water solvent at atomistic resolution using a simple three-body neural-network potential.
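The underlying identity here is classical thermodynamic integration, ΔF = ∫₀¹ ⟨∂U_λ/∂λ⟩_λ dλ. A minimal sketch on a toy system where the ensemble averages are analytic (a 1D harmonic oscillator with an interpolated spring constant; not the paper's neural potential):

```python
import math

# Toy system: U_lam(x) = 0.5 * k(lam) * x^2 with
# k(lam) = (1 - lam) * k0 + lam * k1.
# For a harmonic oscillator <x^2>_lam = kT / k(lam) is known analytically,
# so <dU/dlam> can be evaluated exactly instead of sampled.
kT, k0, k1 = 1.0, 1.0, 4.0

def dU_dlam_mean(lam: float) -> float:
    k = (1 - lam) * k0 + lam * k1
    return 0.5 * (k1 - k0) * (kT / k)   # <dU/dlam> = 0.5*(k1-k0)*<x^2>_lam

# Trapezoidal quadrature of <dU/dlam> over lambda in [0, 1]
n = 1000
lams = [i / n for i in range(n + 1)]
vals = [dU_dlam_mean(l) for l in lams]
dF = sum((vals[i] + vals[i + 1]) / 2 for i in range(n)) / n

# Exact free-energy difference: 0.5 * kT * ln(k1/k0)
exact = 0.5 * kT * math.log(k1 / k0)
print(round(dF, 4), round(exact, 4))
```

In real molecular applications the averages ⟨∂U_λ/∂λ⟩ must be estimated by sampling at each intermediate λ; the paper's contribution is to replace the hand-built interpolation with a learned potential aligned to the sample distributions.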
Accelerated Hydration Site Localization and Thermodynamic Profiling
Hinz, Florian B., Masters, Matthew R., Kieu, Julia N., Mahmoud, Amr H., Lill, Markus A.
Water plays a fundamental role in the structure and function of proteins and other biomolecules. The thermodynamic profiles of water molecules surrounding a protein are critical for ligand binding and recognition. Therefore, identifying the location and thermodynamic behavior of relevant water molecules is important for generating and optimizing lead compounds for affinity and selectivity to a given target. Computational methods have been developed to identify these hydration sites, but are largely limited to simplified models that fail to capture multi-body interactions, or to dynamics-based methods that rely on extensive sampling. Here we present a method for fast and accurate localization and thermodynamic profiling of hydration sites for protein structures. The method is based on a geometric deep neural network trained on a large, novel dataset of explicit-water molecular dynamics simulations. We confirm the accuracy and robustness of our model on experimental data and demonstrate its utility in several case studies.
Latent Ewald summation for machine learning of long-range interactions
Machine learning interatomic potentials (MLIPs) learn from reference quantum mechanical calculations and then predict the energy and forces of atomic configurations quickly, thus allowing for a more accurate and comprehensive exploration of material and molecular properties at scale [1, 2]. Most state-of-the-art MLIP methods use a short-range approximation: the effective potential energy surface experienced by one atom is determined by its atomic neighborhood. Message passing neural networks (MPNNs) [18-21] employ a number of graph convolution layers to communicate information between atoms, thereby capturing long-range interaction up to the local cutoff radius times the number of layers. However, if parts of the system are disconnected on the graph, e.g. two molecules with a distance beyond the cutoff, the message passing scheme does not help. A very interesting approach is the long-distance
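The disconnection problem is easy to see concretely: with a finite cutoff, the atoms of two well-separated molecules fall into different connected components of the atomic graph, so no number of message-passing layers can exchange information between them. A small self-contained sketch (toy coordinates, arbitrary units):

```python
from itertools import combinations

def components(positions, cutoff):
    """Connected components of the graph whose edges join atom pairs
    within the cutoff distance."""
    n = len(positions)
    adj = {i: set() for i in range(n)}
    for i, j in combinations(range(n), 2):
        d = sum((a - b) ** 2 for a, b in zip(positions[i], positions[j])) ** 0.5
        if d <= cutoff:
            adj[i].add(j)
            adj[j].add(i)
    seen, comps = set(), []
    for i in range(n):
        if i in seen:
            continue
        stack, comp = [i], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        comps.append(comp)
    return comps

# Two 3-atom "molecules" 10 units apart, cutoff 5: the graph splits in two,
# so message passing can never couple them, regardless of layer depth.
mol_a = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
mol_b = [(10, 0, 0), (11, 0, 0), (10, 1, 0)]
print(len(components(mol_a + mol_b, cutoff=5.0)))  # 2
```

Within a single component, by contrast, the receptive field grows as cutoff × number of layers, which is the limited form of long-range coupling the passage describes.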
Probing the effects of broken symmetries in machine learning
Langer, Marcel F., Pozdnyakov, Sergey N., Ceriotti, Michele
Symmetry is one of the most central concepts in physics, and it is no surprise that it has also been widely adopted as an inductive bias for machine-learning models applied to the physical sciences. This is especially true for models targeting the properties of matter at the atomic scale. Both established and state-of-the-art approaches, with almost no exceptions, are built to be exactly equivariant to translations, permutations, and rotations of the atoms. Incorporating symmetries -- rotations in particular -- constrains the model design space and implies more complicated architectures that are often also computationally demanding. There are indications that non-symmetric models can easily learn symmetries from data, and that doing so can even be beneficial for the accuracy of the model. We put a model that obeys rotational invariance only approximately to the test, in realistic scenarios involving simulations of gas-phase, liquid, and solid water. We focus specifically on physical observables that are likely to be affected -- directly or indirectly -- by symmetry breaking, finding negligible consequences when the model is used in an interpolative, bulk, regime. Even for extrapolative gas-phase predictions, the model remains very stable, even though symmetry artifacts are noticeable. We also discuss strategies that can be used to systematically reduce the magnitude of symmetry breaking when it occurs, and assess their impact on the convergence of observables.
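A simple way to quantify the kind of symmetry breaking probed here is to compare a model's predictions on a configuration and on randomly rotated copies of it. A minimal sketch with toy stand-in models (not actual interatomic potentials):

```python
import math
import random

def rotate_z(pos, theta):
    """Rotate a list of (x, y, z) points about the z-axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y, z) for x, y, z in pos]

def pair_dist_sum(pos):
    # Sum of pairwise distances: exactly rotation-invariant by construction.
    n = len(pos)
    return sum(math.dist(pos[i], pos[j])
               for i in range(n) for j in range(i + 1, n))

def symmetry_error(model, pos, trials=10, seed=0):
    """Worst-case change in the model's output over random rotations."""
    rng = random.Random(seed)
    e0 = model(pos)
    return max(abs(model(rotate_z(pos, rng.uniform(0.0, 2.0 * math.pi))) - e0)
               for _ in range(trials))

pos = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.5, 0.5)]
invariant = pair_dist_sum
# Add a small coordinate-dependent term to break rotational invariance on purpose.
broken = lambda p: pair_dist_sum(p) + 0.01 * sum(x for x, _, _ in p)
print(symmetry_error(invariant, pos))  # ~0 (floating-point noise only)
print(symmetry_error(broken, pos))     # noticeably nonzero
```

Diagnostics of this form distinguish an exactly equivariant architecture (error at machine precision) from an approximately invariant one, whose residual error can then be tracked against the physical observables of interest.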