Bian, Fu
PharmaGPT: Domain-Specific Large Language Models for Bio-Pharmaceutical and Chemistry
Chen, Linqing, Wang, Weilei, Bai, Zilong, Xu, Peng, Fang, Yan, Fang, Jie, Wu, Wentao, Zhou, Lizhi, Zhang, Ruiji, Xia, Yubin, Xu, Chaobo, Hu, Ran, Xu, Licong, Cai, Qijun, Hua, Haoran, Sun, Jing, Liu, Jin, Qiu, Tian, Liu, Haowen, Hu, Meng, Li, Xiuwen, Gao, Fei, Wang, Yufu, Tie, Lin, Wang, Chaochao, Lu, Jianping, Sun, Cheng, Wang, Yixin, Yang, Shengjie, Li, Yuancheng, Jin, Lu, Zhang, Lisha, Bian, Fu, Ye, Zhongkai, Pei, Lidong, Tu, Changyang
Large language models (LLMs) have revolutionized Natural Language Processing (NLP) by minimizing the need for complex feature engineering. However, the application of LLMs in specialized domains such as biopharmaceuticals and chemistry remains largely unexplored. These fields are characterized by intricate terminology, specialized knowledge, and a high demand for precision, areas where general-purpose LLMs often fall short. In this study, we introduce PharmaGPT, a suite of domain-specialized LLMs with 13 billion and 70 billion parameters, trained on a comprehensive corpus tailored to the bio-pharmaceutical and chemical domains. Our evaluation shows that PharmaGPT surpasses existing general models on domain-specific benchmarks such as NAPLEX, demonstrating its exceptional capability in domain-specific tasks. Remarkably, this performance is achieved with a model that has only a fraction, sometimes just one-tenth, of the parameters of general-purpose large models. This advancement establishes a new benchmark for LLMs in the bio-pharmaceutical and chemical fields, addressing the existing gap in specialized language modeling. It also suggests a promising path for enhanced research and development, paving the way for more precise and effective NLP applications in these areas.
PatentGPT: A Large Language Model for Intellectual Property
Bai, Zilong, Zhang, Ruiji, Chen, Linqing, Cai, Qijun, Zhong, Yuan, Wang, Cong, Fang, Yan, Fang, Jie, Sun, Jing, Wang, Weikuan, Zhou, Lizhi, Hua, Haoran, Qiu, Tian, Wang, Chaochao, Sun, Cheng, Lu, Jianping, Wang, Yixin, Xia, Yubin, Hu, Meng, Liu, Haowen, Xu, Peng, Xu, Licong, Bian, Fu, Gu, Xiaolong, Zhang, Lisha, Wang, Weilei, Tu, Changyang
In recent years, large language models (LLMs) have attracted significant attention for their exceptional performance across a multitude of natural language processing tasks and have been widely applied in various fields. However, applying LLMs in the Intellectual Property (IP) domain is challenging due to the strong need for specialized knowledge, privacy protection, and the processing of extremely long texts in this field. In this technical report, we present for the first time a low-cost, standardized procedure for training IP-oriented LLMs that meets the unique requirements of the IP domain. Using this standard procedure, we trained the PatentGPT series of models based on open-source pretrained models. Evaluated on the open-source IP-oriented benchmark MOZIP, our domain-specific LLMs outperform GPT-4, indicating the effectiveness of the proposed training procedure and the expertise of the PatentGPT models in the IP domain. Remarkably, our model surpassed GPT-4 on the 2019 China Patent Agent Qualification Examination, scoring 65 and matching human expert levels. Additionally, the PatentGPT model that uses the SMoE architecture achieves performance comparable to GPT-4 in the IP domain and demonstrates a better cost-performance ratio on long-text tasks, potentially serving as an alternative to GPT-4 within the IP domain.
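The abstract attributes PatentGPT's favorable cost-performance ratio to an SMoE (sparse mixture-of-experts) architecture, in which each token activates only a few of the model's expert sub-networks. The report itself does not give implementation details, so the following is only a minimal, self-contained sketch of the general top-k gating idea; the expert functions, gate weights, and `smoe_layer` name here are illustrative assumptions, not the authors' code.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def smoe_layer(x, experts, gate_weights, k=2):
    """Sparse mixture-of-experts layer (illustrative sketch).

    A gate scores every expert for input x, only the top-k experts are
    actually evaluated, and their outputs are mixed by the renormalized
    gate probabilities. This is why SMoE models can have many parameters
    while paying compute for only k experts per token.
    """
    # Gate logits: one score per expert (here a simple dot product).
    logits = [sum(wi * xi for wi, xi in zip(w, x)) for w in gate_weights]
    probs = softmax(logits)
    # Select the k highest-scoring experts; the rest are skipped entirely.
    top = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:k]
    norm = sum(probs[i] for i in top)
    # Weighted sum of only the selected experts' outputs.
    out = [0.0] * len(x)
    for i in top:
        y = experts[i](x)
        for d in range(len(x)):
            out[d] += (probs[i] / norm) * y[d]
    return out, top

# Toy usage: four "experts" that just scale the input by different factors.
experts = [lambda x, s=s: [s * v for v in x] for s in (1.0, 2.0, 3.0, 4.0)]
gate_weights = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5], [-1.0, -1.0]]
output, chosen = smoe_layer([1.0, 2.0], experts, gate_weights, k=2)
```

With k=2 of 4 experts, only half the expert computation runs per input, which mirrors the cost advantage the abstract claims for long-text tasks (at the price of a routing step and load-balancing concerns in real training).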