Beyond Chains: Bridging Large Language Models and Knowledge Bases in Complex Question Answering

Zhu, Yihua, Liu, Qianying, Aizawa, Akiko, Shimodaira, Hidetoshi

arXiv.org Artificial Intelligence

Knowledge Base Question Answering (KBQA) aims to answer natural language questions using structured knowledge from KBs. While LLM-only approaches offer generalization, they suffer from outdated knowledge, hallucinations, and lack of transparency. Chain-based KG-RAG methods address these issues by incorporating external KBs, but are limited to simple chain-structured questions due to the absence of planning and logical structuring. Inspired by semantic parsing methods, we propose PDRR: a four-stage framework consisting of Predict, Decompose, Retrieve, and Reason. Our method first predicts the question type and decomposes the question into structured triples, then retrieves relevant information from KBs and guides the LLM, acting as an agent, to reason over and complete the decomposed triples. Experimental results demonstrate that PDRR consistently outperforms existing methods across various LLM backbones and achieves superior performance on both chain-structured and non-chain complex questions.
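The four PDRR stages can be pictured as a plain pipeline over triple patterns with unknown slots. The sketch below is hypothetical: the `predict` and `decompose` stubs and the toy KB are illustrative stand-ins (the paper uses an LLM for these steps), and `reason` simply fills the open slot from retrieved facts.

```python
# Minimal sketch of the Predict -> Decompose -> Retrieve -> Reason flow.
# All stage implementations here are toy stand-ins for LLM-driven steps.

def predict(question):
    # Stage 1: classify the question type (chain vs. non-chain); stubbed.
    return "chain"

def decompose(question):
    # Stage 2: decompose into triple patterns with unknown slots ("?x").
    # Hard-coded for the running example below.
    return [("Obama", "born_in", "?x")]

def retrieve(triples, kb):
    # Stage 3: fetch KB facts matching each triple pattern's bound slots.
    subj, rel, _ = triples[0]
    return [f for f in kb if f[0] == subj and f[1] == rel]

def reason(triples, facts):
    # Stage 4: complete the unknown slot from retrieved evidence.
    return facts[0][2] if facts else None

def pdrr(question, kb):
    qtype = predict(question)          # question type could steer decomposition
    triples = decompose(question)
    facts = retrieve(triples, kb)
    return reason(triples, facts)

kb = [("Obama", "born_in", "Honolulu")]
print(pdrr("Where was Obama born?", kb))  # -> Honolulu
```

The point of the structure is that retrieval operates on decomposed triples rather than on the raw question, which is what lets the approach handle non-chain question shapes.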


Qorgau: Evaluating LLM Safety in Kazakh-Russian Bilingual Contexts

Goloburda, Maiya, Laiyk, Nurkhan, Turmakhan, Diana, Wang, Yuxia, Togmanov, Mukhammed, Mansurov, Jonibek, Sametov, Askhat, Mukhituly, Nurdaulet, Wang, Minghan, Orel, Daniil, Mujahid, Zain Muhammad, Koto, Fajri, Baldwin, Timothy, Nakov, Preslav

arXiv.org Artificial Intelligence

Large language models (LLMs) can generate harmful content, posing risks to users. While significant progress has been made in developing taxonomies for LLM risks and safety evaluation prompts, most studies have focused on monolingual contexts, primarily in English. However, language- and region-specific risks in bilingual contexts are often overlooked, and core findings can diverge from those in monolingual settings. In this paper, we introduce Qorgau, a novel dataset specifically designed for safety evaluation in Kazakh and Russian, reflecting the unique bilingual context in Kazakhstan, where both Kazakh (a low-resource language) and Russian (a high-resource language) are spoken. Experiments with both multilingual and language-specific LLMs reveal notable differences in safety performance, emphasizing the need for tailored, region-specific datasets to ensure the responsible and safe deployment of LLMs in countries like Kazakhstan. Warning: this paper contains example data that may be offensive, harmful, or biased.


Enhancing Open-Domain Table Question Answering via Syntax- and Structure-aware Dense Retrieval

Jin, Nengzheng, Li, Dongfang, Chen, Junying, Siebert, Joanna, Chen, Qingcai

arXiv.org Artificial Intelligence

Open-domain table question answering aims to answer a question by retrieving and extracting information from a large collection of tables. Existing studies of open-domain table QA either directly adopt text retrieval methods or consider the table structure only in the encoding layer for table retrieval, which may cause syntactical and structural information loss during table scoring. To address this issue, we propose a syntax- and structure-aware retrieval method for the open-domain table QA task. It provides syntactical representations for the question and uses the structural header and value representations for the tables to avoid the loss of fine-grained syntactical and structural information. Then, a syntactical-to-structural aggregator is used to obtain the matching score between the question and a candidate table by mimicking the human retrieval process. Experimental results show that our method achieves state-of-the-art results on the NQ-tables dataset and substantially outperforms strong baselines on a newly curated open-domain Text-to-SQL dataset.
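The idea of scoring a table against a question via separate header and value representations can be sketched with a toy aggregator. This is a minimal illustration, not the paper's model: the bag-of-characters `embed` stands in for learned dense encoders, and the max-then-mean pooling is an assumed simple form of a syntactical-to-structural aggregation.

```python
import math

def embed(text):
    # Toy bag-of-characters embedding; a stand-in for a learned dense encoder.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - 97] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    # Vectors are unit-normalized, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

def table_score(question_tokens, headers, values):
    # For each question token, take its best match over header and value
    # representations, then average: a simple stand-in aggregator that
    # scores the table without flattening it into plain text.
    q_vecs = [embed(t) for t in question_tokens]
    cell_vecs = [embed(h) for h in headers] + [embed(v) for v in values]
    return sum(max(cosine(q, c) for c in cell_vecs) for q in q_vecs) / len(q_vecs)

score = table_score(["capital", "country"],
                    headers=["Country", "Capital"],
                    values=["France", "Paris"])
```

Keeping headers and cell values as separate representations, rather than serializing the whole table into one string, is what preserves the fine-grained structural signal the abstract refers to.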