NusaBERT: Teaching IndoBERT to be Multilingual and Multicultural
Wongso, Wilson, Setiawan, David Samuel, Limcorn, Steven, Joyoadikusumo, Ananto
Indonesia's linguistic landscape is remarkably diverse, encompassing over 700 languages and dialects, making it one of the world's most linguistically rich nations. This diversity, coupled with the widespread practice of code-switching and the presence of low-resource regional languages, presents unique challenges for modern pre-trained language models. In response to these challenges, we developed NusaBERT, building upon IndoBERT by incorporating vocabulary expansion and leveraging a diverse multilingual corpus that includes regional languages and dialects. Through rigorous evaluation across a range of benchmarks, NusaBERT demonstrates state-of-the-art performance in tasks involving multiple languages of Indonesia, paving the way for future natural language understanding research for under-represented languages.
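The abstract mentions vocabulary expansion as the key step in adapting IndoBERT to regional languages. As a minimal sketch of the general idea (not the paper's actual procedure), new subword tokens can be appended to an existing vocabulary while the embedding table grows in lockstep; all names and the toy data below are hypothetical:

```python
import random

def expand_vocab(vocab, embeddings, new_tokens, dim):
    """Append unseen tokens to the vocabulary and grow the embedding
    table with randomly initialised rows. Hypothetical helper: NusaBERT's
    actual expansion procedure is described in the paper itself."""
    for tok in new_tokens:
        if tok not in vocab:
            vocab[tok] = len(vocab)
            embeddings.append([random.gauss(0.0, 0.02) for _ in range(dim)])
    return vocab, embeddings

# Toy IndoBERT-like vocabulary with a 4-dimensional embedding table.
vocab = {"[UNK]": 0, "makan": 1, "rumah": 2}
emb = [[0.0] * 4 for _ in vocab]

# Hypothetical subwords from regional languages (e.g. Javanese, Sundanese).
vocab, emb = expand_vocab(vocab, emb, ["mangan", "imah"], dim=4)
print(len(vocab), len(emb))  # the two tables stay aligned
```

In practice this corresponds to adding tokens to the tokenizer and resizing the model's input embedding matrix before continued pre-training on the multilingual corpus.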
Domain-Specific Language Model Post-Training for Indonesian Financial NLP
Maharani, Ni Putu Intan, Yustiawan, Yoga, Rochim, Fauzy Caesar, Purwarianti, Ayu
Recently, self-supervised pre-training of contextual language models on large general-domain corpora, such as ELMo [7], ULMFiT [8], XLNet [9], GPT [10], BERT [2], and IndoBERT [1], has significantly improved performance on various natural language processing downstream tasks, including sentence classification, token classification, and question answering. One of the notable examples is Bidirectional Encoder Representations from Transformers (BERT), which has become a standard benchmark for training NLP models for various downstream tasks. Another example is IndoBERT, the implementation of BERT specific to the Indonesian language, which also performs well as a building block for training task-specific NLP models for Indonesian [1]. IndoBERT, as the foundation of this research, has a similar model architecture to BERT. However, those pre-training works focus on the general domain, in which the unlabeled text data are collected from Web domains, newswire, Wikipedia, and BookCorpus [1], [2].
Web-based Application for Detecting Indonesian Clickbait Headlines using IndoBERT
Fakhruzzaman, Muhammad Noor, Gunawan, Sie Wildan
With the increasing use of clickbait in Indonesian online news, newsworthy articles sometimes get buried among clickbait headlines. A reliable, lightweight tool is needed to detect such clickbait on the go. Leveraging the state-of-the-art natural language processing model BERT, a RESTful API-based application was developed. This study offloads the computing resources needed to train and run the model to a cloud server, while the client-side application only needs to send a request to the API; the cloud server handles the rest. This study proposes the design of, and develops, a web-based application to detect clickbait in Indonesian using IndoBERT as the language model. The application's usage is discussed, and it is available for public use with a mean ROC-AUC of 89%.
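The abstract describes a client-server split: the client posts a headline to a RESTful endpoint and the server runs the IndoBERT classifier. A minimal sketch of that request/response shape is below; the handler name, JSON fields, and the keyword stub standing in for the model are all hypothetical, not the paper's actual API:

```python
import json

def classify_handler(request_body, model=None):
    """Hypothetical server-side handler mirroring the described RESTful
    design: the client posts a JSON headline, the server scores it with
    the (IndoBERT) model and returns a clickbait probability. A trivial
    keyword stub stands in for the real model here."""
    headline = json.loads(request_body)["headline"]
    if model is not None:
        score = model(headline)
    else:
        # Stub: flag sensational all-caps interjections as likely clickbait.
        score = 0.9 if "WOW" in headline.upper() else 0.1
    return json.dumps({"headline": headline, "clickbait_score": score})

# Client side only needs to build a request payload and read the JSON reply.
resp = classify_handler(json.dumps({"headline": "WOW! Kamu tidak akan percaya ini"}))
print(resp)
```

The design choice the paper highlights is exactly this offloading: the heavy IndoBERT inference lives behind the endpoint, so the client stays lightweight.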