SsciBERT: A Pre-trained Language Model for Social Science Texts
Shen, Si, Liu, Jiangfeng, Lin, Litao, Huang, Ying, Zhang, Lin, Liu, Chang, Feng, Yutong, Wang, Dongbo
arXiv.org Artificial Intelligence
With the rapid, large-scale growth of the social science literature, quickly finding existing research on relevant issues has become an urgent need for researchers. Previous studies, such as SciBERT, have shown that pre-training on domain-specific texts can improve performance on natural language processing tasks. However, no pre-trained language model for the social sciences has been available so far. In light of this, the present research proposes a pre-trained model based on abstracts published in journals indexed in the Social Science Citation Index (SSCI).
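The core of BERT-style domain pre-training referenced above is the masked language modeling (MLM) objective: tokens in domain text are randomly hidden and the model learns to predict them. As a minimal sketch (not the authors' implementation; real BERT pre-training also uses the 80/10/10 mask/random/keep scheme and subword tokenization), the masking step can be illustrated in plain Python:

```python
import random

def mask_tokens(tokens, mask_rate=0.15, mask_token="[MASK]", seed=0):
    """Illustrative MLM masking: replace ~15% of tokens with [MASK].

    Returns the masked sequence and a parallel label list where only
    masked positions carry the original token (others are None, i.e.
    excluded from the prediction loss).
    """
    rng = random.Random(seed)  # seeded for reproducibility
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mask_rate:
            masked.append(mask_token)
            labels.append(tok)   # model must recover this token
        else:
            masked.append(tok)
            labels.append(None)  # position not scored
    return masked, labels

# Example: masking a (whitespace-tokenized) social-science abstract snippet
abstract = "pre-training using domain-specific texts can improve downstream performance".split()
masked, labels = mask_tokens(abstract)
```

During pre-training, the loss is computed only at the masked positions, which is why the unmasked labels are dropped; a production pipeline would delegate this step to a data collator rather than hand-rolled masking.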
Nov-24-2022