Tang, Yingheng
MatterChat: A Multi-Modal LLM for Material Science
Tang, Yingheng, Xu, Wenbin, Cao, Jie, Ma, Jianzhu, Gao, Weilu, Farrell, Steve, Erichson, Benjamin, Mahoney, Michael W., Nonaka, Andy, Yao, Zhi
In-silico material discovery and design have traditionally relied on high-fidelity first-principles methods such as density functional theory (DFT) [1] and ab-initio molecular dynamics (AIMD) [2] to accurately model atomic interactions and predict material properties. Despite their effectiveness, these methods face significant challenges due to their prohibitive computational cost, limiting their scalability for high-throughput screening across vast chemical spaces and for simulations over large length and time scales. Moreover, many advanced materials remain beyond the reach of widespread predictive theories due to a fundamental lack of mechanistic understanding. These challenges stem from the inherent complexity of their chemical composition, phase stability, and the intricate interplay of multiple order parameters, compounded by the lack of self-consistent integration between theoretical models and multi-modal experimental findings. As a result, breakthroughs in functional materials, such as new classes of correlated oxides, nitrides, and low-dimensional quantum materials, have largely been serendipitous or guided by phenomenological intuition rather than systematic, theory-driven design. Attempts to predict new materials and functionalities have often yielded mixed results, with theoretically proposed systems failing to exhibit the desired properties when synthesized and tested.
Scientific Computing with Diffractive Optical Neural Networks
Chen, Ruiyang, Tang, Yingheng, Ma, Jianzhu, Gao, Weilu
Machine learning (ML) has demonstrated state-of-the-art performance in a variety of applications, such as computer vision (1, 2), medicine (3), finance (4), autonomous engineering design (5), and scientific computing (6, 7), but performing ML tasks on hardware systems requires substantial energy and computational resources. Fundamental quantum mechanical limits create a bottleneck for simultaneously reducing the energy consumption and increasing the integration density of electronic circuits to keep pace with the growing scale of modern large-scale ML models (8, 9). Optical architectures are emerging as promising high-throughput and energy-efficient ML hardware accelerators by leveraging the parallelism and low static energy consumption of a fundamentally different particle, the photon, for computing (10, 11). Among optical systems, free-space diffractive optical neural networks (DONNs), which can host millions of computing neurons and form deep neural network architectures, optically perform ML tasks through the spatial light modulation and optical diffraction of coherent light across multiple diffractive layers (12-30). Prior demonstrations have shown the capability of DONN systems to recognize input images directly in the optical domain.
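The forward pass of such a system can be illustrated numerically: a coherent field propagates between layers by free-space diffraction and each layer applies a trainable phase modulation. The sketch below is not the authors' implementation; the wavelength, pixel pitch, layer spacing, and the use of the angular spectrum method for propagation are illustrative assumptions.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex optical field a distance z via the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    # Free-space transfer function; evanescent components (arg < 0) are discarded.
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

def donn_forward(input_image, phase_masks, wavelength=750e-9, dx=400e-6, z=0.1):
    """Pass a coherent field through successive phase-modulating diffractive layers."""
    field = input_image.astype(complex)
    for phase in phase_masks:
        field = angular_spectrum_propagate(field, wavelength, dx, z)
        field = field * np.exp(1j * phase)  # spatial light modulation at this layer
    field = angular_spectrum_propagate(field, wavelength, dx, z)
    return np.abs(field) ** 2  # a detector array measures intensity

# Toy usage: a 64x64 input field through two random phase layers.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
masks = [rng.uniform(0, 2 * np.pi, (64, 64)) for _ in range(2)]
intensity = donn_forward(img, masks)
```

In a trained DONN, the per-layer phase masks would be optimized (e.g. by gradient descent on a simulated forward model) so that the output intensity pattern encodes the classification result.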