SsciBERT: A Pre-trained Language Model for Social Science Texts
Shen, Si, Liu, Jiangfeng, Lin, Litao, Huang, Ying, Zhang, Lin, Liu, Chang, Feng, Yutong, Wang, Dongbo
arXiv.org Artificial Intelligence
With the large-scale growth of scientific literature, ways to quickly find existing research on relevant issues have become an urgent need for researchers. Previous studies, such as SciBERT, have shown that pre-training on domain-specific texts can improve performance on natural language processing tasks. However, no pre-trained language model for the social sciences has been available so far. In light of this, the present research proposes a pre-trained model based on abstracts published in Social Science Citation Index (SSCI) journals.
Nov-24-2022