TCM-5CEval: Extended Deep Evaluation Benchmark for LLM's Comprehensive Clinical Research Competence in Traditional Chinese Medicine

Huang, Tianai, Chen, Jiayuan, Lu, Lu, Chen, Pengcheng, Li, Tianbin, Han, Bing, Tang, Wenchao, Xu, Jie, Li, Ming

arXiv.org Artificial Intelligence

Large language models (LLMs) have demonstrated exceptional capabilities in general domains, yet their application in highly specialized and culturally rich fields like Traditional Chinese Medicine (TCM) requires rigorous and nuanced evaluation. Building upon prior foundational work such as TCM-3CEval, which highlighted systemic knowledge gaps and the importance of cultural-contextual alignment, we introduce TCM-5CEval, a more granular and comprehensive benchmark. TCM-5CEval is designed to assess LLMs across five critical dimensions: (1) Core Knowledge (TCM-Exam), (2) Classical Literacy (TCM-LitQA), (3) Clinical Decision-making (TCM-MRCD), (4) Chinese Materia Medica (TCM-CMM), and (5) Clinical Non-pharmacological Therapy (TCM-ClinNPT). We conducted a thorough evaluation of fifteen prominent LLMs, revealing significant performance disparities and identifying top-performing models such as deepseek_r1 and gemini_2_5_pro. Our findings show that while models exhibit proficiency in recalling foundational knowledge, they struggle with the interpretative complexities of classical texts. Critically, permutation-based consistency testing reveals widespread fragilities in model inference. All evaluated models, including the highest-scoring ones, displayed substantial performance degradation when faced with varied question-option ordering, indicating a pervasive sensitivity to positional bias and a lack of robust understanding. TCM-5CEval not only provides a more detailed diagnostic tool for LLM capabilities in TCM but also exposes fundamental weaknesses in their reasoning stability. To promote further research and standardized comparison, TCM-5CEval has been uploaded to the Medbench platform, joining its predecessor in the "In-depth Challenge for Comprehensive TCM Abilities" special track.
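A permutation-based consistency test of this kind can be sketched in a few lines. The `model_answer` callable and the always-pick-A toy model below are hypothetical stand-ins for a real LLM call, not the benchmark's actual interface:

```python
import itertools

def consistency_under_permutation(model_answer, question, options, correct_label):
    """Score a model on every ordering of the answer options.
    `model_answer` is any callable (question, options) -> chosen index;
    here it stands in for an LLM call (an assumption of this sketch)."""
    correct_text = options[correct_label]
    perms = list(itertools.permutations(options))
    hits = sum(1 for perm in perms
               if perm[model_answer(question, list(perm))] == correct_text)
    return hits / len(perms)  # 1.0 only if the model is order-invariant

# A toy "model" with maximal positional bias: it always picks option A.
always_a = lambda q, opts: 0
score = consistency_under_permutation(
    always_a, "Which option is correct?", ["w", "x", "y", "z"], 0)
# "always A" is right only in the 6 of 24 orderings that put "w" first,
# so score == 0.25.
```

A model that genuinely understands the question scores the same under every ordering; a positionally biased one, like the toy above, does not.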


TianHui: A Domain-Specific Large Language Model for Diverse Traditional Chinese Medicine Scenarios

Yin, Ji, He, Menglan, Zhang, Yujie, Zhang, Linshuai, Ma, Tingting, Tian, Ce, Wu, Jie, Xu, Lin, Jiang, Tao

arXiv.org Artificial Intelligence

Background: Currently, domain-specific large language models (LLMs) in traditional Chinese medicine (TCM) are primarily designed for clinical practice and medical education, yet they demonstrate substantial limitations when applied to research contexts owing to inadequate adaptability to complex tasks, thereby constraining their scientific utility. Moreover, the absence of comprehensive evaluation datasets and computational resource constraints hinder rigorous performance assessments and prevent extensive comparative or ablation experiments, ultimately resulting in suboptimal model performance and weakened persuasiveness. Objective: To address these challenges, this study proposed a method for constructing a specialized LLM for the TCM domain based on contextual data integration and domain knowledge fusion, and successfully developed a privatized LLM for the TCM profession, TianHui. Methods: First, we acquired a large amount of TCM data, including academic literature, published books, online public data, and other supplementary materials, and pre-processed it to generate a 0.97 GB unsupervised dataset and 611,312 QA pairs. Then, we adopted a phased training strategy (Pre-Training (PT) followed by Supervised Fine-Tuning (SFT)) and integrated three key technologies (Quantized Low-Rank Adaptation (QLoRA) parameter-efficient fine-tuning, DeepSpeed Stage 2 distributed training optimization, and Flash Attention 2 accelerated computation) to achieve optimal allocation of computational resources while guaranteeing training stability. Finally, we evaluated TianHui on 12 different types of benchmark test datasets and conducted extensive comparison and ablation experiments. Results: TianHui demonstrated excellent performance in 12 TCM-related application scenarios, ranking in the top three on every evaluation index in six test datasets (APQ, TCMCD, HFR, HCCA, DHPE, and TLAW) and achieving optimal performance on all indicators of the other six (TCMEE, APR, GCPMI, TCMKQA, TCMRC, and ADTG).


Intuitionistic $j$-Do-Calculus in Topos Causal Models

Mahadevan, Sridhar

arXiv.org Artificial Intelligence

In this paper, we generalize Pearl's do-calculus to an Intuitionistic setting called $j$-stable causal inference inside a topos of sheaves. Our framework is an elaboration of the recently proposed framework of Topos Causal Models (TCMs), where causal interventions are defined as subobjects. We generalize the original setting of TCM using the Lawvere-Tierney topology on a topos, defined by a modal operator $j$ on the subobject classifier $Ω$. We introduce $j$-do-calculus, where we replace global truth with local truth defined by Kripke-Joyal semantics, and formalize causal reasoning as structure-preserving morphisms that are stable along $j$-covers. $j$-do-calculus is a sound rule system whose premises and conclusions are formulas of the internal Intuitionistic logic of the causal topos. We define $j$-stability for conditional independences and interventional claims as local truth in the internal logic of the causal topos. We give three inference rules that mirror Pearl's insertion/deletion and action/observation exchange, and we prove soundness in the Kripke-Joyal semantics. A companion paper in preparation will describe how to estimate the required entities from data and instantiate $j$-do with standard discovery procedures (e.g., score-based and constraint-based methods), and will include experimental results on how to (i) form data-driven $j$-covers (via regime/section constructions), (ii) compute chartwise conditional independences after graph surgeries, and (iii) glue them to certify the premises of the $j$-do rules in practice.
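For orientation, the three classical do-calculus rules that the $j$-do rules localize are, in standard notation ($G_{\overline{X}}$ deletes arrows into $X$, $G_{\underline{Z}}$ deletes arrows out of $Z$, and $Z(W)$ is the set of $Z$-nodes that are not ancestors of any $W$-node in $G_{\overline{X}}$):

```latex
% Rule 1 (insertion/deletion of observations):
P(y \mid \mathrm{do}(x), z, w) = P(y \mid \mathrm{do}(x), w)
\quad \text{if } (Y \perp\!\!\!\perp Z \mid X, W) \text{ in } G_{\overline{X}}

% Rule 2 (action/observation exchange):
P(y \mid \mathrm{do}(x), \mathrm{do}(z), w) = P(y \mid \mathrm{do}(x), z, w)
\quad \text{if } (Y \perp\!\!\!\perp Z \mid X, W) \text{ in } G_{\overline{X}\,\underline{Z}}

% Rule 3 (insertion/deletion of actions):
P(y \mid \mathrm{do}(x), \mathrm{do}(z), w) = P(y \mid \mathrm{do}(x), w)
\quad \text{if } (Y \perp\!\!\!\perp Z \mid X, W) \text{ in } G_{\overline{X}\,\overline{Z(W)}}
```

In the paper's setting, each side condition becomes a locally true formula under the Kripke-Joyal semantics rather than a global d-separation statement.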


Leveraging Group Relative Policy Optimization to Advance Large Language Models in Traditional Chinese Medicine

Xie, Jiacheng, Zeng, Shuai, Yu, Yang, Tang, Xiaoting, An, Guanghui, Xu, Dong

arXiv.org Artificial Intelligence

Traditional Chinese Medicine (TCM) presents a rich and structurally unique knowledge system that challenges conventional applications of large language models (LLMs). Although previous TCM-specific LLMs have shown progress through supervised fine-tuning, they often face limitations in alignment, data quality, and evaluation consistency. In this study, we introduce Ladder-base, the first TCM-focused LLM trained with Group Relative Policy Optimization (GRPO), a reinforcement learning method that improves reasoning and factual consistency by optimizing response selection based on intra-group comparisons. Ladder-base is built upon the Qwen2.5-7B-Instruct foundation model and trained exclusively on the textual subset of the TCM-Ladder benchmark, using 80 percent of the data for training and the remaining 20 percent split evenly between validation and test sets. Through standardized evaluation, Ladder-base demonstrates superior performance across multiple reasoning metrics when compared to both state-of-the-art general-purpose LLMs such as GPT-4, Gemini 2.5, Claude 3, and Qwen3 and domain-specific TCM models including BenTsao, HuatuoGPT2, and Zhongjing. These findings suggest that GRPO provides an effective and efficient strategy for aligning LLMs with expert-level reasoning in traditional medical domains and supports the development of trustworthy and clinically grounded TCM artificial intelligence systems.
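The intra-group comparison at the heart of GRPO can be illustrated by its advantage computation: each sampled response is scored relative to the mean and standard deviation of its own group, removing the need for a learned value function. This is a generic sketch of that step, not Ladder-base's actual training code, and the reward values are invented for illustration:

```python
def grpo_advantages(rewards):
    """Group-relative advantages: normalize each response's reward by the
    mean and standard deviation of its sampling group, so better-than-group
    responses get positive advantage and worse ones negative."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5 or 1.0  # guard against a zero-variance group
    return [(r - mean) / std for r in rewards]

# Four sampled answers to one question, scored by a reward model (toy values):
adv = grpo_advantages([1.0, 0.0, 0.5, 0.5])
# the best answer gets a positive advantage, the worst a negative one,
# and the advantages of each group sum to zero
```

The policy update then weights each response's log-probability by its advantage, which is why GRPO needs only a reward signal and group sampling rather than a separate critic.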


eIQ Neutron: Redefining Edge-AI Inference with Integrated NPU and Compiler Innovations

Bamberg, Lennart, Minnella, Filippo, Bosio, Roberto, Ottati, Fabrizio, Wang, Yuebin, Lee, Jongmin, Lavagno, Luciano, Fuks, Adam

arXiv.org Artificial Intelligence

Neural Processing Units (NPUs) are key to enabling efficient AI inference in resource-constrained edge environments. While peak tera-operations per second (TOPS) is often used to gauge performance, it poorly reflects real-world performance and tends instead to correlate with higher silicon cost. To address this, architects must focus on maximizing compute utilization without sacrificing flexibility. This paper presents the eIQ Neutron efficient NPU, integrated into a commercial flagship MPU, alongside co-designed compiler algorithms. The architecture employs a flexible, data-driven design, while the compiler uses a constraint-programming approach to optimize compute and data movement based on workload characteristics. Compared to the leading embedded NPU and compiler stack, our solution achieves an average speedup of 1.8x (4x peak) at equal TOPS and memory resources across standard AI benchmarks. Even against NPUs with double the compute and memory resources, Neutron delivers up to 3.3x higher performance.


Topos Causal Models

Mahadevan, Sridhar

arXiv.org Artificial Intelligence

We propose topos causal models (TCMs), a novel class of causal models that exploit the key properties of a topos category: they are (co)complete, meaning all (co)limits exist, they admit a subobject classifier, and allow exponential objects. The main goal of this paper is to show that these properties are central to many applications in causal inference. For example, subobject classifiers allow a categorical formulation of causal intervention, which creates sub-models. Limits and colimits allow causal diagrams of arbitrary complexity to be "solved", using a novel interpretation of causal approximation. Exponential objects enable reasoning about equivalence classes of operations on causal models, such as covered edge reversal and causal homotopy. Analogous to structural causal models (SCMs), TCMs are defined by a collection of functions, each defining a "local autonomous" causal mechanism that assemble to induce a unique global function from exogenous to endogenous variables. Since the category of TCMs is (co)complete, which we prove in this paper, every causal diagram has a "solution" in the form of a (co)limit: this implies that any arbitrary causal model can be "approximated" by some global function with respect to the morphisms going into or out of the diagram. Natural transformations are crucial in measuring the quality of approximation. In addition, we show that causal interventions are modeled by subobject classifiers: any sub-model is defined by a monic arrow into its parent model. Exponential objects permit reasoning about entire classes of causal equivalences and interventions. Finally, as TCMs form a topos, they admit an internal logic defined as a Mitchell-Bénabou language with an associated Kripke-Joyal semantics. We show how to reason about causal models in TCMs using this internal logic.


An Interpretable AI framework Quantifying Traditional Chinese Medicine Principles Towards Enhancing and Integrating with Modern Biomedicine

Li, Haoran, Cheng, Xingye, Huang, Ziyang, Luo, Jingyuan, Xu, Qianqian, Zhao, Qiguang, Guo, Tianchen, Zhang, Yumeng, Zhong, Linda Lidan, Bian, Zhaoxiang, Tang, Leihan, Lyu, Aiping, Tian, Liang

arXiv.org Artificial Intelligence

Traditional Chinese Medicine diagnosis and treatment principles, established through centuries of trial-and-error clinical practice, directly map patient-specific symptom patterns to personalised herbal therapies. These empirical holistic mapping principles offer valuable strategies to address remaining challenges of reductionist methodologies in modern biomedicine. However, the lack of a quantitative framework and molecular-level evidence has limited their interpretability and reliability. Here, we present an AI framework trained on ancient and classical TCM formula records to quantify the symptom pattern-herbal therapy mappings. Interestingly, we find that empirical TCM diagnosis and treatment are consistent with the encoding-decoding processes in the AI model. This enables us to construct an interpretable TCM embedding space (TCM-ES) using the model's quantitative representation of TCM principles. Validated through broad and extensive TCM patient data, the TCM-ES offers universal quantification of TCM practice and therapeutic efficacy. We further map biomedical entities into the TCM-ES through correspondence alignment. We find that the principal directions of the TCM-ES are significantly associated with key biological functions (such as metabolism, immunity, and homeostasis), and that disease and herb embedding proximity aligns with their genetic relationships in the human protein interactome, demonstrating the biological significance of TCM principles. Moreover, the TCM-ES uncovers latent disease relationships and provides an alternative metric to assess clinical efficacy for modern disease-drug pairs. Finally, we construct a comprehensive and integrative TCM knowledge graph, which predicts potential associations between diseases and targets, drugs, herbal compounds, and herbal therapies, providing TCM-informed opportunities for disease analysis and drug development.


TCM-3CEval: A Triaxial Benchmark for Assessing Responses from Large Language Models in Traditional Chinese Medicine

Huang, Tianai, Lu, Lu, Chen, Jiayuan, Liu, Lihao, He, Junjun, Zhao, Yuping, Tang, Wenchao, Xu, Jie

arXiv.org Artificial Intelligence

Large language models (LLMs) excel in various NLP tasks and modern medicine, but their evaluation in traditional Chinese medicine (TCM) is underexplored. To address this, we introduce TCM-3CEval, a benchmark assessing LLMs in TCM across three dimensions: core knowledge mastery, classical text understanding, and clinical decision-making. We evaluate diverse models, including international (e.g., GPT-4o), Chinese (e.g., InternLM), and medical-specific (e.g., PLUSE). Results show a performance hierarchy: all models have limitations in specialized subdomains like Meridian & Acupoint theory and Various TCM Schools, revealing gaps between current capabilities and clinical needs. Models with Chinese linguistic and cultural priors perform better in classical text interpretation and clinical reasoning. TCM-3CEval sets a standard for AI evaluation in TCM, offering insights for optimizing LLMs in culturally grounded medical domains. The benchmark is available on Medbench's TCM track, aiming to assess LLMs' TCM capabilities in basic knowledge, classic texts, and clinical decision-making through multidimensional questions and real cases.


From Metaphor to Mechanism: How LLMs Decode Traditional Chinese Medicine Symbolic Language for Modern Clinical Relevance

Tang, Jiacheng, Wu, Nankai, Gao, Fan, Dai, Chengxiao, Zhao, Mengyao, Zhao, Xinjie

arXiv.org Artificial Intelligence

Metaphorical expressions are abundant in Traditional Chinese Medicine (TCM), conveying complex disease mechanisms and holistic health concepts through culturally rich and often abstract terminology. Bridging these metaphors to anatomically driven Western medical (WM) concepts poses significant challenges for both automated language processing and real-world clinical practice. To address this gap, we propose a novel multi-agent and chain-of-thought (CoT) framework designed to interpret TCM metaphors accurately and map them to WM pathophysiology. Specifically, our approach combines domain-specialized agents (TCM Expert, WM Expert) with a Coordinator Agent, leveraging stepwise chain-of-thought prompts to ensure transparent reasoning and conflict resolution. We detail a methodology for building a metaphor-rich TCM dataset, discuss strategies for effectively integrating multi-agent collaboration and CoT reasoning, and articulate the theoretical underpinnings that guide metaphor interpretation across distinct medical paradigms. We present a comprehensive system design and highlight both the potential benefits and limitations of our approach, while leaving placeholders for future experimental validation. Our work aims to support clinical decision-making, cross-system educational initiatives, and integrated healthcare research, ultimately offering a robust scaffold for reconciling TCM's symbolic language with the mechanistic focus of Western medicine.
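A coordinator pattern of this kind can be sketched as follows. The stub agents and their one-line "interpretations" are invented for illustration; a real system would wrap LLM prompts with chain-of-thought instructions behind the same callable interface:

```python
def coordinator(metaphor, agents):
    """Collect each domain agent's reading of a TCM metaphor and apply a
    naive conflict-resolution rule: accept a unanimous reading, otherwise
    flag the case for stepwise chain-of-thought review."""
    readings = {name: agent(metaphor) for name, agent in agents.items()}
    distinct = set(readings.values())
    verdict = distinct.pop() if len(distinct) == 1 else "needs stepwise review"
    return readings, verdict

# Stub experts (hypothetical outputs, not drawn from the paper's dataset):
agents = {
    "tcm_expert": lambda m: "liver qi stagnation",
    "wm_expert": lambda m: "functional GI dysmotility",
}
readings, verdict = coordinator("wood overacting on earth", agents)
# the two experts disagree, so verdict == "needs stepwise review"
```

In the full framework, the "needs stepwise review" branch is where the Coordinator Agent would invoke CoT prompting to reconcile the TCM and WM readings rather than simply flagging them.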


Assessing data-driven predictions of band gap and electrical conductivity for transparent conducting materials

Ottomano, Federico, Goulermas, John Y., Gusev, Vladimir, Savani, Rahul, Gaultois, Michael W., Manning, Troy D., Lin, Hai, Manzanera, Teresa P., Poole, Emmeline G., Dyer, Matthew S., Claridge, John B., Alaria, Jon, Daniels, Luke M., Varma, Su, Rimmer, David, Sanderson, Kevin, Rosseinsky, Matthew J.

arXiv.org Artificial Intelligence

Machine Learning (ML) has offered innovative perspectives for accelerating the discovery of new functional materials, leveraging the increasing availability of material databases. Despite the promising advances, data-driven methods face constraints imposed by the quantity and quality of available data. Moreover, ML is often employed in tandem with simulated datasets originating from density functional theory (DFT), and assessed through in-sample evaluation schemes. This scenario raises questions about the practical utility of ML in uncovering new and significant material classes for industrial applications. Here, we propose a data-driven framework aimed at accelerating the discovery of new transparent conducting materials (TCMs), an important category of semiconductors with a wide range of applications. To mitigate the shortage of available data, we create and validate unique experimental databases comprising several examples of existing TCMs. We assess state-of-the-art (SOTA) ML models for property prediction from the stoichiometry alone. We propose a bespoke evaluation scheme to provide empirical evidence on the ability of ML to uncover new, previously unseen materials of interest. We test our approach on a list of 55 compositions containing typical elements of known TCMs. Although our study indicates that ML tends to identify new TCMs compositionally similar to those in the training data, we empirically demonstrate that it can highlight material candidates that may have been previously overlooked, offering a systematic approach to identifying materials that are likely to display the characteristics of TCMs.