Moro, Viggo
Solving Differential Equations with Constrained Learning
Moro, Viggo, Chamon, Luiz F. O.
Partial differential equations (PDEs) are central to modeling phenomena across science and engineering, and a variety of numerical methods have been developed to approximate their solutions, such as the celebrated finite element method (FEM). While these traditional methods come with approximation guarantees and provide reliable solutions, their accuracy is often tied to computationally intensive fine discretizations. Moreover, they do not naturally incorporate prior knowledge, such as real-world measurements or previous solutions to similar equations, and any change to the PDE problem, such as its initial conditions or mesh size, requires the solution to be fully recomputed (Brenner & Scott, 2007; LeVeque, 2007; Katsikadelis, 2016). Methods based on neural network (NN) architectures, such as physics-informed NNs (PINNs) (Lagaris et al., 1998; Raissi et al., 2019; Lu et al., 2021b) and neural operators (NOs) (Li et al., 2021; Lu et al., 2021a; Rahman et al., 2023), have been proposed to address these challenges. Rather than discretizing the PDE, these mesh-free approaches directly fit NNs to its solution. They can also integrate prior knowledge and tackle entire families of PDEs by simply aggregating additional training losses. Nevertheless, they are highly sensitive to hyperparameters such as the choice of collocation points and the weights associated with each loss. This work addresses these issues by developing a science-constrained learning (SCL) framework. It demonstrates that finding a (weak) solution of a PDE is equivalent to solving a constrained learning problem with worst-case losses, which explains the limitations of previous methods that minimize the expected value of aggregated losses. SCL also organically integrates structural constraints (e.g., invariances) and (partial) measurements or known solutions. The resulting constrained learning problems can be tackled using a practical algorithm that yields accurate solutions across a variety of PDEs, NN architectures, and levels of prior knowledge without extensive hyperparameter tuning, and sometimes even at a lower computational cost.
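As a rough illustration of the constrained formulation described above, the sketch below trains a network on a toy 1D Poisson problem, treating a sample estimate of the worst-case PDE residual as a constraint and updating a multiplier by dual ascent. The equation, network, collocation sampler, tolerance eps, and step sizes are illustrative assumptions, not the algorithm from the paper.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Small MLP ansatz u_theta(x); the architecture is illustrative.
u = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                  nn.Linear(64, 64), nn.Tanh(),
                  nn.Linear(64, 1))
f = lambda x: (torch.pi ** 2) * torch.sin(torch.pi * x)  # forcing term of -u'' = f

def residual(x):
    # PDE residual r(x) = -u''(x) - f(x), computed with automatic differentiation.
    x = x.requires_grad_(True)
    ux = u(x)
    du = torch.autograd.grad(ux.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    return -d2u - f(x)

lam, eps = torch.tensor(1.0), 1e-3            # dual variable, constraint level
opt = torch.optim.Adam(u.parameters(), lr=1e-3)

for step in range(5000):
    x = torch.rand(256, 1)                     # sampled collocation points on [0, 1]
    worst = residual(x).abs().max()            # sample estimate of the sup-norm residual
    boundary = u(torch.tensor([[0.0], [1.0]])).pow(2).mean()  # u(0) = u(1) = 0
    lagrangian = boundary + lam * (worst - eps)
    opt.zero_grad()
    lagrangian.backward()
    opt.step()
    # Dual ascent: increase the multiplier while the worst-case constraint is violated.
    lam = torch.clamp(lam + 0.01 * (worst - eps).detach(), min=0.0)
```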
Multimodal Learning for Crystalline Materials
Moro, Viggo, Loh, Charlotte, Dangovski, Rumen, Ghorashi, Ali, Ma, Andrew, Chen, Zhuo, Lu, Peter Y., Christensen, Thomas, Soljačić, Marin
Artificial intelligence (AI) has revolutionized the field of materials science by improving the prediction of properties and accelerating the discovery of novel materials. In recent years, publicly available material data repositories containing data for various material properties have grown rapidly. In this work, we introduce Multimodal Learning for Crystalline Materials (MLCM), a new method for training a foundation model for crystalline materials via multimodal alignment, where high-dimensional material properties (i.e., modalities) are connected in a shared latent space to produce highly useful material representations. We show the utility of MLCM on multiple axes: (i) MLCM achieves state-of-the-art performance for material property prediction on the challenging Materials Project database; (ii) MLCM enables a novel, highly accurate method for inverse design, allowing one to screen for stable materials with desired properties; and (iii) MLCM allows the extraction of interpretable emergent features that may provide insight to materials scientists. Further, we explore several novel methods for aligning an arbitrary number of modalities, improving upon prior art in multimodal learning that focuses on bimodal alignment. Our work brings innovations from the ongoing AI revolution into the domain of materials science and identifies materials as a testbed for the next generation of AI.
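As a minimal sketch of aligning more than two modalities in a shared latent space, the code below projects three hypothetical material modalities through separate encoders and sums a symmetric contrastive loss over every modality pair. The modality names, input dimensions, encoder shapes, and the pairwise-sum strategy are assumptions for illustration only, not the MLCM architecture or its alignment methods.

```python
import itertools
import torch
import torch.nn as nn
import torch.nn.functional as F

class Projector(nn.Module):
    """Maps one modality into the shared latent space."""
    def __init__(self, dim_in, dim_latent=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_in, 256), nn.ReLU(),
                                 nn.Linear(256, dim_latent))
    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def contrastive_loss(za, zb, temperature=0.07):
    """Symmetric InfoNCE between two batches of latent codes."""
    logits = za @ zb.t() / temperature
    labels = torch.arange(za.size(0))
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))

# Three toy modalities with different input dimensions (hypothetical).
dims = {"structure": 512, "dos": 200, "properties": 16}
encoders = nn.ModuleDict({name: Projector(d) for name, d in dims.items()})
opt = torch.optim.Adam(encoders.parameters(), lr=1e-4)

batch = {name: torch.randn(32, d) for name, d in dims.items()}  # placeholder data
z = {name: encoders[name](x) for name, x in batch.items()}

# Align every pair of modalities by summing bimodal contrastive terms.
loss = sum(contrastive_loss(z[a], z[b]) for a, b in itertools.combinations(z, 2))
opt.zero_grad()
loss.backward()
opt.step()
```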