Chen, Changhong
Uni-ELF: A Multi-Level Representation Learning Framework for Electrolyte Formulation Design
Zeng, Boshen, Chen, Sian, Liu, Xinxin, Chen, Changhong, Deng, Bin, Wang, Xiaoxu, Gao, Zhifeng, Zhang, Yuzhi, E, Weinan, Zhang, Linfeng
Advancements in lithium battery technology heavily rely on the design and engineering of electrolytes. However, current schemes for molecular design and recipe optimization of electrolytes lack an effective computational-experimental closed loop and often fall short in accurately predicting diverse electrolyte formulation properties. In this work, we introduce Uni-ELF, a novel multi-level representation learning framework to advance electrolyte design. Our approach involves two-stage pretraining: reconstructing three-dimensional molecular structures at the molecular level using the Uni-Mol model, and predicting statistical structural properties (e.g., radial distribution functions) from molecular dynamics simulations at the mixture level. Through this comprehensive pretraining, Uni-ELF is able to capture intricate molecular and mixture-level information, which significantly enhances its predictive capability. As a result, Uni-ELF substantially outperforms state-of-the-art methods in predicting both molecular properties (e.g., melting point, boiling point, synthesizability) and formulation properties (e.g., conductivity, Coulombic efficiency). Moreover, Uni-ELF can be seamlessly integrated into an automatic experimental design workflow. We believe this innovative framework will pave the way for automated AI-based electrolyte design and engineering.
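The abstract's mixture-level pretraining target, the radial distribution function g(r), can be computed from MD particle coordinates as sketched below. This is an illustrative sketch only, not code from Uni-ELF; the function name, parameters, and cubic periodic box are assumptions made for the example.

```python
import numpy as np

def radial_distribution_function(positions, box_length, n_bins=50, r_max=None):
    """Histogram pair distances into g(r) for particles in a cubic periodic box.

    Illustrative only: Uni-ELF uses RDFs from MD simulations as mixture-level
    pretraining targets; this function and its parameters are hypothetical.
    """
    n = len(positions)
    if r_max is None:
        r_max = box_length / 2.0  # minimum-image convention is valid up to L/2
    edges = np.linspace(0.0, r_max, n_bins + 1)
    counts = np.zeros(n_bins)
    for i in range(n - 1):
        # Displacements from particle i to all later particles (unique pairs)
        d = positions[i + 1:] - positions[i]
        d -= box_length * np.round(d / box_length)  # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        counts += np.histogram(r, bins=edges)[0]
    # Normalize by the expected pair count in each shell for an ideal gas
    rho = n / box_length**3
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    ideal = rho * shell_vol * n / 2.0
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, counts / ideal
```

For an ideal (uncorrelated) system, g(r) fluctuates around 1; structured liquids such as electrolytes show peaks at characteristic coordination distances, which is the statistical-structural signal the mixture-level pretraining stage learns to predict.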
SciAssess: Benchmarking LLM Proficiency in Scientific Literature Analysis
Cai, Hengxing, Cai, Xiaochen, Chang, Junhan, Li, Sihang, Yao, Lin, Wang, Changxin, Gao, Zhifeng, Wang, Hongshuai, Li, Yongge, Lin, Mujie, Yang, Shuwen, Wang, Jiankun, Xu, Mingjun, Huang, Jin, Xi, Fang, Zhuang, Jiaxi, Yin, Yuqi, Li, Yaqi, Chen, Changhong, Cheng, Zheng, Zhao, Zifeng, Zhang, Linfeng, Ke, Guolin
Recent breakthroughs in Large Language Models (LLMs) have revolutionized natural language understanding and generation, sparking significant interest in applying them to scientific literature analysis. However, existing benchmarks fail to adequately evaluate the proficiency of LLMs in this domain, particularly in scenarios requiring higher-level abilities beyond mere memorization and the handling of multimodal data. In response to this gap, we introduce SciAssess, a benchmark specifically designed for the comprehensive evaluation of LLMs in scientific literature analysis. SciAssess aims to thoroughly assess the efficacy of LLMs by focusing on their capabilities in Memorization (L1), Comprehension (L2), and Analysis \& Reasoning (L3). It encompasses a variety of tasks drawn from diverse scientific fields, including fundamental science, alloy materials, biomedicine, drug discovery, and organic materials. To ensure the reliability of SciAssess, rigorous quality control measures have been implemented, ensuring accuracy, anonymization, and compliance with copyright standards. SciAssess evaluates 11 LLMs, including GPT, Claude, and Gemini, highlighting their strengths and areas for improvement. This evaluation supports the ongoing development of LLM applications in the analysis of scientific literature. SciAssess and its resources are available at \url{https://sci-assess.github.io/}.