TianHui: A Domain-Specific Large Language Model for Diverse Traditional Chinese Medicine Scenarios
Yin, Ji, He, Menglan, Zhang, Yujie, Zhang, Linshuai, Ma, Tingting, Tian, Ce, Wu, Jie, Xu, Lin, Jiang, Tao
Background: Currently, domain-specific large language models (LLMs) in traditional Chinese medicine (TCM) are primarily designed for clinical practice and medical education, yet they demonstrate substantial limitations in research contexts owing to inadequate adaptability to complex tasks, constraining their scientific utility. Moreover, the absence of comprehensive evaluation datasets, together with computational resource constraints, hinders rigorous performance assessment and prevents extensive comparative or ablation experiments, ultimately resulting in suboptimal model performance and weakened persuasiveness.

Objective: To address these challenges, this study proposed a method for constructing a specialized LLM for the TCM domain based on contextual data integration and domain knowledge fusion, and developed a privatized LLM for the TCM profession, TianHui.

Methods: First, we acquired a large volume of TCM data, including academic literature, published books, online public data, and other supplementary materials, and pre-processed it to generate a 0.97 GB unsupervised dataset and 611,312 question-answer (QA) pairs. We then adopted a phased training strategy (Pre-Training (PT) followed by Supervised Fine-Tuning (SFT)) and integrated three key technologies: Quantized Low-Rank Adaptation (QLoRA) parameter-efficient fine-tuning, DeepSpeed Stage 2 distributed training optimization, and FlashAttention-2 accelerated computation, achieving efficient allocation of computational resources while guaranteeing training stability. Finally, we evaluated TianHui on 12 benchmark test datasets of different types and conducted extensive comparison and ablation experiments.

Results: TianHui demonstrated excellent performance across 12 TCM-related application scenarios. It ranked in the top three on every evaluation metric in six test datasets (APQ, TCMCD, HFR, HCCA, DHPE, and TLAW), and achieved the best performance on all metrics in the remaining six (TCMEE, APR, GCPMI, TCMKQA, TCMRC, and ADTG).
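The Methods describe a training stack combining QLoRA fine-tuning with DeepSpeed ZeRO Stage 2 and gradient accumulation. As a minimal sketch of what such a configuration might look like, the snippet below expresses a Stage 2 config and QLoRA-style hyperparameters as plain dictionaries; all names and values here are illustrative assumptions, not the paper's actual settings.

```python
# Illustrative sketch (assumed values): a DeepSpeed ZeRO Stage 2 config
# and QLoRA-style adapter hyperparameters, written as plain dicts.

ds_stage2_config = {
    "train_micro_batch_size_per_gpu": 4,
    "gradient_accumulation_steps": 8,
    "bf16": {"enabled": True},
    "zero_optimization": {
        "stage": 2,                  # shard optimizer state and gradients
        "overlap_comm": True,        # overlap gradient reduce with backward
        "contiguous_gradients": True,
    },
}

qlora_config = {
    "load_in_4bit": True,            # quantize frozen base weights (QLoRA)
    "lora_r": 64,                    # low-rank adapter dimension (assumed)
    "lora_alpha": 16,
    "lora_dropout": 0.05,
    "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj"],
}

def effective_batch_size(cfg, num_gpus):
    """Global batch = per-GPU micro-batch x accumulation steps x GPU count."""
    return (cfg["train_micro_batch_size_per_gpu"]
            * cfg["gradient_accumulation_steps"]
            * num_gpus)

print(effective_batch_size(ds_stage2_config, num_gpus=2))  # → 64
```

The point of the sketch is the resource trade-off the abstract alludes to: Stage 2 sharding and gradient accumulation let a modest GPU budget reach a usable global batch size while QLoRA keeps the trainable parameter count small.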