Polpanumas, Charin
Segment Discovery: Enhancing E-commerce Targeting
Li, Qiqi, Singh, Roopali, Polpanumas, Charin, Fiez, Tanner, Kumar, Namita, Chakrabarti, Shreya
Popular promotions include discounts, bundled offers, and free services. By offering these promotions, companies aim to increase revenue and grow their customer base while also improving customer experience. However, such promotions usually incur a cost and can become unsustainable without guardrails in place. A popular approach is to target customers with a high or low propensity for a desired behavior. For example, a retail company that wants to retain its customers is likely to offer incentives to those at risk of leaving. However, previous studies show that this strategy is ineffective and can be detrimental to company objectives [2, 6, 7]. Moreover, additional analysis is needed to choose the propensity score threshold for targeting (e.g., target anyone whose propensity to leave is higher than 0.8), because the wrong threshold may lead to sub-optimal outcomes [2]. Each customer responds differently to the same promotion.
ThaiCoref: Thai Coreference Resolution Dataset
Trakuekul, Pontakorn, Leong, Wei Qi, Polpanumas, Charin, Sawatphol, Jitkapat, Tjhi, William Chandra, Rutherford, Attapol T.
While coreference resolution is a well-established research area in Natural Language Processing (NLP), research focusing on the Thai language remains limited due to the lack of large annotated corpora. In this work, we introduce ThaiCoref, a dataset for Thai coreference resolution. Our dataset comprises 777,271 tokens, 44,082 mentions, and 10,429 entities across four text genres: university essays, newspapers, speeches, and Wikipedia. Our annotation scheme is built upon the OntoNotes benchmark, with adjustments to address Thai-specific phenomena. Utilizing ThaiCoref, we train models employing a multilingual encoder and cross-lingual transfer techniques, achieving a best F1 score of 67.88% on the test set. Error analysis reveals challenges posed by Thai's unique linguistic features. To benefit the NLP community, we make the dataset and the model publicly available at http://www.github.com/nlp-chula/thai-coref.
Thai Universal Dependency Treebank
Sriwirote, Panyut, Leong, Wei Qi, Polpanumas, Charin, Thanyawong, Santhawat, Tjhi, William Chandra, Aroonmanakun, Wirote, Rutherford, Attapol T.
Automatic dependency parsing of Thai sentences has been underexplored, as evidenced by the lack of large Thai dependency treebanks with complete dependency structures and the lack of a published systematic evaluation of state-of-the-art models, especially transformer-based parsers. In this work, we address these problems by introducing the Thai Universal Dependency Treebank (TUD), the largest Thai treebank to date, consisting of 3,627 trees annotated in accordance with the Universal Dependencies (UD) framework. We then benchmark dependency parsing models that incorporate pretrained transformers as encoders, training them on Thai-PUD and our TUD. The evaluation results show that most of our models outperform models reported in previous papers and provide insight into the optimal choice of components to include in Thai dependency parsers. The new treebank and the full predictions of every model in our experiments are made available in a GitHub repository for further study.
PyThaiNLP: Thai Natural Language Processing in Python
Phatthiyaphaibun, Wannaphong, Chaovavanich, Korakot, Polpanumas, Charin, Suriyawongkul, Arthit, Lowphansirikul, Lalita, Chormai, Pattarawat, Limkonchotiwat, Peerat, Suntorntip, Thanathip, Udomcharoenchaikit, Can
We present PyThaiNLP, a free and open-source natural language processing (NLP) library for the Thai language implemented in Python. It provides a wide range of software, models, and datasets for Thai. We first provide brief historical context on tools for Thai prior to the development of PyThaiNLP. We then outline the functionalities it provides, as well as its datasets and pre-trained language models. We next summarize its development milestones and discuss our experience during its development. We conclude by demonstrating how industrial and research communities utilize PyThaiNLP in their work. The library is freely available at https://github.com/pythainlp/pythainlp.