Nguyen, Minh-Tien
SuperRAG: Beyond RAG with Layout-Aware Graph Modeling
Yang, Jeff, Vu, Duy-Khanh, Nguyen, Minh-Tien, Nguyen, Xuan-Quang, Nguyen, Linh, Le, Hung
This paper introduces layout-aware graph modeling for multimodal RAG. Unlike traditional RAG methods, which mostly deal with flat text chunks, the proposed method captures the relationships among modalities using a graph structure. To do that, a graph modeling structure is defined based on document layout parsing. The structure of an input document is retained through connections among text chunks, tables, and figures. This representation allows the method to handle complex questions that require information from multiple modalities. To confirm the effectiveness of the graph modeling, a flexible RAG pipeline is developed using robust components. Experimental results on four benchmark test sets confirm the contribution of layout-aware modeling to the performance of the RAG pipeline.
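A minimal sketch of how such a layout-aware document graph could be represented, assuming hypothetical node and edge types (text, table, figure; "follows", "refers_to"); this is an illustration, not the paper's actual schema or implementation.

```python
# Hypothetical layout-aware document graph for multimodal RAG.
# Node kinds and edge relations are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str
    kind: str              # "text", "table", or "figure"
    content: str           # chunk text, serialized table, or figure caption
    page: int

@dataclass
class DocumentGraph:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)   # (src_id, dst_id, relation)

    def add_node(self, node: Node) -> None:
        self.nodes[node.node_id] = node

    def link(self, src: str, dst: str, relation: str) -> None:
        # e.g. "follows" for reading order, "refers_to" for a chunk citing a table/figure
        self.edges.append((src, dst, relation))

    def neighbors(self, node_id: str) -> list:
        return [self.nodes[d] for s, d, _ in self.edges if s == node_id]

# Usage: connect a paragraph to the table it describes so retrieval can expand across modalities.
g = DocumentGraph()
g.add_node(Node("p1", "text", "Revenue grew 12% (see Table 2).", page=3))
g.add_node(Node("t2", "table", "Table 2: quarterly revenue ...", page=3))
g.link("p1", "t2", "refers_to")
print([n.node_id for n in g.neighbors("p1")])   # ['t2']
```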
Automatic Prompt Selection for Large Language Models
Do, Viet-Tung, Hoang, Van-Khanh, Nguyen, Duy-Hung, Sabahi, Shahab, Yang, Jeff, Hotta, Hajime, Nguyen, Minh-Tien, Le, Hung
Large Language Models (LLMs) can perform various natural language processing tasks with suitable instruction prompts. However, designing effective prompts manually is challenging and time-consuming. Existing methods for automatic prompt optimization lack either flexibility or efficiency. In this paper, we propose an effective approach to automatically select the optimal prompt for a given input from a finite set of synthetic candidate prompts. Our approach consists of three steps: (1) clustering the training data and generating candidate prompts for each cluster using an LLM-based prompt generator; (2) synthesizing a dataset of input-prompt-output tuples for training a prompt evaluator to rank the prompts based on their relevance to the input; (3) using the prompt evaluator to select the best prompt for a new input at test time. Our approach balances prompt generality and specificity and eliminates the need for resource-intensive training and inference. It demonstrates competitive performance on zero-shot question-answering datasets: GSM8K, MultiArith, and AQuA.
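A minimal sketch of step (3), selecting the highest-ranked candidate prompt at test time, assuming a trained prompt evaluator is available as a scoring callable. The `toy_score` function stands in for that evaluator purely for illustration; it is not the paper's ranker.

```python
# Sketch of select-at-test-time: the evaluator ranks candidate prompts for an input.
from typing import Callable, List

def select_prompt(question: str,
                  candidate_prompts: List[str],
                  score: Callable[[str, str], float]) -> str:
    """Return the candidate prompt the evaluator ranks highest for this input."""
    return max(candidate_prompts, key=lambda p: score(question, p))

# Toy evaluator: prefer prompts sharing vocabulary with the input.
# In the paper this role is played by a ranker trained on input-prompt-output tuples.
def toy_score(question: str, prompt: str) -> float:
    q, p = set(question.lower().split()), set(prompt.lower().split())
    return len(q & p) / (len(p) + 1e-9)

prompts = ["Solve the arithmetic word problem step by step.",
           "Answer the multiple-choice algebra question."]
print(select_prompt("A train travels 60 km; what is its speed?", prompts, toy_score))
```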
VLSP 2023 -- LTER: A Summary of the Challenge on Legal Textual Entailment Recognition
Tran, Vu, Nguyen, Ha-Thanh, Vo, Trung, Luu, Son T., Dang, Hoang-Anh, Le, Ngoc-Cam, Le, Thi-Thuy, Nguyen, Minh-Tien, Nguyen, Truong-Son, Nguyen, Le-Minh
In this new era of rapid AI development, especially in language processing, the demand for AI in the legal domain is increasingly critical. While research in other languages such as English, Japanese, and Chinese is well established, we introduce the first fundamental research for the Vietnamese language in the legal domain: legal textual entailment recognition, through the Vietnamese Language and Speech Processing workshop. In analyzing participants' results, we discuss certain linguistic aspects critical to the legal domain that pose challenges to be addressed.
Towards Safer Operations: An Expert-involved Dataset of High-Pressure Gas Incidents for Preventing Future Failures
Inoue, Shumpei, Nguyen, Minh-Tien, Mizokuchi, Hiroki, Nguyen, Tuan-Anh D., Nguyen, Huu-Hiep, Le, Dung Tien
This paper introduces IncidentAI, a new dataset for safety prevention. Unlike prior corpora, which usually cover a single task, our dataset comprises three tasks: named entity recognition, cause-effect extraction, and information retrieval. The dataset is annotated by domain experts who have at least six years of practical experience as high-pressure gas conservation managers. We validate the contribution of the dataset in the scenario of safety prevention. Preliminary results on the three tasks show that NLP techniques are beneficial for analyzing incident reports to prevent future failures. The dataset facilitates future research in the NLP and incident management communities. Access to the dataset is also provided (the IncidentAI dataset is available at: https://github.com/Cinnamon/incident-ai-dataset).
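To make the three-task setup concrete, here is a hypothetical example of what a single annotated incident record could look like; the field names and labels are illustrative assumptions, not the dataset's actual schema (see the repository above for the real format).

```python
# Hypothetical annotated incident record covering the three tasks.
record = {
    "report": "The valve on tank A was left open, causing a gas leak during transfer.",
    "entities": [  # named entity recognition: character spans over the report text
        {"text": "valve", "label": "EQUIPMENT", "start": 4, "end": 9},
        {"text": "gas leak", "label": "INCIDENT", "start": 45, "end": 53},
    ],
    "cause_effect": [  # cause-effect extraction
        {"cause": "The valve on tank A was left open",
         "effect": "a gas leak during transfer"},
    ],
    "retrieval_query": "incidents caused by valves left open",  # information retrieval
}
```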
When Giant Language Brains Just Aren't Enough! Domain Pizzazz with Knowledge Sparkle Dust
Nguyen, Minh-Tien, Nguyen, Duy-Hung, Sabahi, Shahab, Le, Hung, Yang, Jeff, Hotta, Hajime
Large language models (LLMs) have significantly advanced the field of natural language processing, with GPT models at the forefront. While their remarkable performance spans a range of tasks, adapting LLMs to real-world business scenarios still poses challenges warranting further investigation. This paper presents an empirical analysis aimed at bridging the gap in adapting LLMs to practical use cases. To do that, we select the question answering (QA) task in insurance as a case study because of the reasoning it requires. Based on this task, we design a new model that relies on LLMs empowered by additional knowledge extracted from insurance policy rulebooks and DBpedia. The additional knowledge helps the LLMs understand new insurance concepts for domain adaptation. Preliminary results on two QA datasets show that knowledge enhancement significantly improves the reasoning ability of GPT-3.5 (55.80% and 57.83% in terms of accuracy). The analysis also indicates that existing public knowledge bases, e.g., DBpedia, are beneficial for knowledge enhancement. Our findings reveal that the inherent complexity of business scenarios often necessitates the incorporation of domain-specific knowledge and external resources for effective problem-solving.
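A minimal sketch of the knowledge-enhanced prompting idea: retrieve relevant rulebook passages and prepend them to the question before calling the LLM. The toy lexical retriever and the generic `ask_llm` callable are placeholders, not the paper's actual retrieval or model components.

```python
# Sketch of knowledge-enhanced QA: augment the prompt with retrieved domain knowledge.
from typing import Callable, List

def retrieve_knowledge(question: str, rulebook: List[str], top_k: int = 2) -> List[str]:
    # Toy lexical retriever: rank rulebook passages by word overlap with the question.
    q = set(question.lower().split())
    ranked = sorted(rulebook, key=lambda p: len(q & set(p.lower().split())), reverse=True)
    return ranked[:top_k]

def answer_with_knowledge(question: str, rulebook: List[str],
                          ask_llm: Callable[[str], str]) -> str:
    context = "\n".join(retrieve_knowledge(question, rulebook))
    prompt = (f"Use the following insurance policy knowledge to answer.\n"
              f"Knowledge:\n{context}\n\nQuestion: {question}\nAnswer:")
    return ask_llm(prompt)
```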
Emotion-Cause Pair Extraction as Question Answering
Nguyen, Huu-Hiep, Nguyen, Minh-Tien
The task of Emotion-Cause Pair Extraction (ECPE) aims to extract all potential emotion-cause pairs from a document without any annotation of emotion or cause clauses. Previous approaches to ECPE have tried to improve conventional two-step processing schemes by using complex architectures for modeling emotion-cause interaction. In this paper, we cast the ECPE task as a question answering (QA) problem and propose simple yet effective BERT-based solutions to tackle it. Given a document, our Guided-QA model first predicts the best emotion clause using a fixed question. The predicted emotion is then used as a question to predict the most likely cause of the emotion. We evaluate our model on a standard ECPE corpus. The experimental results show that despite its simplicity, Guided-QA achieves promising results and is easy to reproduce. The code of Guided-QA is also provided.
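A minimal sketch of the two-step idea using a generic extractive QA pipeline from Hugging Face Transformers; the checkpoint and question wording are assumptions for illustration, not the paper's trained model or exact prompts.

```python
# Two-step guided QA: a fixed question finds the emotion clause, and the
# predicted emotion then becomes the question for the cause clause.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

def guided_qa(document: str) -> tuple:
    # Step 1: a fixed question locates the emotion clause.
    emotion = qa(question="Which clause expresses an emotion?", context=document)["answer"]
    # Step 2: the predicted emotion clause conditions the cause question.
    cause = qa(question=f"What causes the emotion in: {emotion}?", context=document)["answer"]
    return emotion, cause

doc = ("The project was cancelled after months of work. "
       "She felt deeply disappointed when she heard the news.")
print(guided_qa(doc))
```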
Learning to Generate Questions by Enhancing Text Generation with Sentence Selection
Duong, Do Hoang Thai, Son, Nguyen Hong, Le, Hung, Nguyen, Minh-Tien
We introduce an approach to the answer-aware question generation problem. Instead of relying only on the capability of strong pre-trained language models, we observe that the information of answers and questions can be found in some relevant sentences in the context. Based on that, we design a model that includes two modules: a selector and a generator. The selector forces the model to focus more on sentences relevant to a given answer, providing implicit local information. The generator produces questions by implicitly combining local information from the selector and global information from the whole context encoded by the encoder. The model is trained jointly to take advantage of latent interactions between the two modules. Experimental results on two benchmark datasets show that our model outperforms strong pre-trained models on the question generation task. The code is also available (shorturl.at/lV567).
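A minimal sketch of the selector-generator idea: a toy selector picks answer-relevant sentences (local information) and a generic seq2seq `generate` callable produces the question from both the selection and the full context (global information). Both components are placeholders; the paper trains the two modules jointly rather than pipelining heuristics.

```python
# Sketch of a selector + generator for answer-aware question generation.
from typing import Callable, List

def select_sentences(context: str, answer: str, top_k: int = 2) -> List[str]:
    # Toy selector: rank sentences by lexical overlap with the answer.
    sentences = [s.strip() for s in context.split(".") if s.strip()]
    a = set(answer.lower().split())
    ranked = sorted(sentences, key=lambda s: len(a & set(s.lower().split())), reverse=True)
    return ranked[:top_k]

def generate_question(context: str, answer: str,
                      generate: Callable[[str], str]) -> str:
    local = " ".join(select_sentences(context, answer))
    # The generator sees both the selected (local) sentences and the full (global) context.
    return generate(f"answer: {answer} focus: {local} context: {context}")
```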
Robust Deep Reinforcement Learning for Extractive Legal Summarization
Nguyen, Duy-Hung, Nguyen, Bao-Sinh, Nghiem, Nguyen Viet Dung, Le, Dung Tien, Khatun, Mim Amina, Nguyen, Minh-Tien, Le, Hung
Automatic summarization of legal texts is an important and still challenging task, since legal documents are often long and complicated, with unusual structures and styles. Recent deep models trained end-to-end with differentiable losses can summarize natural text well, yet they show limited results when applied to the legal domain. In this paper, we propose to use reinforcement learning to train current deep summarization models so as to improve their performance in the legal domain. To this end, we adopt proximal policy optimization methods and introduce novel reward functions that encourage the generation of candidate summaries satisfying both lexical and semantic criteria. We apply our method to training different summarization backbones and observe a consistent and significant performance gain across three public legal datasets.
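A minimal sketch of a reward that mixes a lexical and a semantic term, in the spirit of the reward design described above; the specific functions and weighting are illustrative assumptions, not the authors' formulation.

```python
# Sketch of a combined lexical + semantic reward for RL-trained summarization.
from typing import Callable

def lexical_reward(candidate: str, reference: str) -> float:
    """Unigram F1 overlap, a rough stand-in for a ROUGE-style score."""
    c, r = candidate.lower().split(), reference.lower().split()
    overlap = len(set(c) & set(r))
    if not c or not r or overlap == 0:
        return 0.0
    precision, recall = overlap / len(set(c)), overlap / len(set(r))
    return 2 * precision * recall / (precision + recall)

def combined_reward(candidate: str, reference: str,
                    semantic_sim: Callable[[str, str], float],
                    alpha: float = 0.5) -> float:
    """Weighted sum of lexical overlap and a semantic similarity (e.g. embedding cosine)."""
    return alpha * lexical_reward(candidate, reference) + (1 - alpha) * semantic_sim(candidate, reference)
```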
A Span Extraction Approach for Information Extraction on Visually-Rich Documents
Nguyen, Tuan-Anh D., Vu, Hieu M., Son, Nguyen Hong, Nguyen, Minh-Tien
Information extraction (IE) from visually-rich documents (VRDs) has recently achieved state-of-the-art performance thanks to the adaptation of Transformer-based language models, which demonstrates the great potential of pre-training methods. In this paper, we present a new approach to improve the capability of language model pre-training on VRDs. First, we introduce a new IE model that is query-based and employs the span extraction formulation instead of the commonly used sequence labelling approach. Second, to further extend the span extraction formulation, we propose a new training task that focuses on modelling the relationships between semantic entities within a document. This task enables spans to be extracted recursively and can be used both as a pre-training objective and as an IE downstream task. Evaluation on various datasets of popular business documents (invoices, receipts) shows that our proposed method significantly improves the performance of existing models, while providing a mechanism to accumulate model knowledge from multiple downstream IE tasks.
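A minimal sketch of a query-conditioned span extraction head in PyTorch, predicting start and end positions over token representations; the dimensions and the way the query is injected are assumptions for illustration, not the paper's exact architecture.

```python
# Sketch of a query-based span extraction head: condition token states on a
# query (e.g. the field or parent entity to extract) and predict start/end logits.
import torch
import torch.nn as nn

class SpanExtractionHead(nn.Module):
    def __init__(self, hidden_size: int = 768):
        super().__init__()
        self.query_proj = nn.Linear(hidden_size, hidden_size)
        self.start_head = nn.Linear(hidden_size, 1)
        self.end_head = nn.Linear(hidden_size, 1)

    def forward(self, token_states: torch.Tensor, query_state: torch.Tensor):
        # token_states: (batch, seq_len, hidden); query_state: (batch, hidden)
        fused = token_states + self.query_proj(query_state).unsqueeze(1)
        start_logits = self.start_head(fused).squeeze(-1)   # (batch, seq_len)
        end_logits = self.end_head(fused).squeeze(-1)       # (batch, seq_len)
        return start_logits, end_logits

head = SpanExtractionHead()
tokens = torch.randn(2, 128, 768)
query = torch.randn(2, 768)
start, end = head(tokens, query)
print(start.shape, end.shape)   # torch.Size([2, 128]) torch.Size([2, 128])
```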