
KoSimpleQA: A Korean Factuality Benchmark with an Analysis of Reasoning LLMs

Ko, Donghyeon; Jin, Yeguk; Chae, Kyubyung; Lee, Byungwook; Jo, Chansong; In, Sookyo; Lee, Jaehong; Kim, Taesup; Kwak, Donghyun

arXiv.org Artificial Intelligence

We present Korean SimpleQA (KoSimpleQA), a benchmark for evaluating factuality in large language models (LLMs) with a focus on Korean cultural knowledge. KoSimpleQA is designed to be challenging yet easy to grade, consisting of 1,000 short, fact-seeking questions with unambiguous answers. We conduct a comprehensive evaluation across a diverse set of open-source LLMs of varying sizes that support Korean, and find that even the strongest model generates the correct answer only 33.7% of the time, underscoring the challenging nature of KoSimpleQA. Notably, performance rankings on KoSimpleQA differ substantially from those on the English SimpleQA, highlighting the unique value of our dataset. Furthermore, our analysis of reasoning LLMs shows that engaging reasoning capabilities in the factual QA task can both help models better elicit their latent knowledge and improve their ability to abstain when uncertain. KoSimpleQA can be found at https://anonymous.4open.science/r/KoSimpleQA-62EB.
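SimpleQA-style benchmarks grade each short answer as correct, incorrect, or not attempted, so that abstention ("I don't know") is measured separately from factual error. A minimal sketch of such a grader follows; exact string matching stands in for the LLM judge such benchmarks typically use, and the abstain markers and names are illustrative assumptions, not KoSimpleQA's actual grading code.

```python
# Toy grader for short-answer factuality benchmarks: each response is
# classified as correct, incorrect, or not attempted (abstention).
from dataclasses import dataclass

# Illustrative abstention markers (English and Korean); a real grader
# would use an LLM judge rather than substring checks.
ABSTAIN_MARKERS = ("i don't know", "모르겠습니다")

@dataclass
class Graded:
    correct: int
    incorrect: int
    not_attempted: int

def grade(responses, gold_answers):
    """Grade responses by case-insensitive exact match against gold answers."""
    g = Graded(0, 0, 0)
    for resp, gold in zip(responses, gold_answers):
        r = resp.strip().lower()
        if any(m in r for m in ABSTAIN_MARKERS):
            g.not_attempted += 1
        elif r == gold.strip().lower():
            g.correct += 1
        else:
            g.incorrect += 1
    return g
```

Reporting correct-rate over all questions (as in the 33.7% figure above) and over attempted questions separately is what lets the abstention analysis of reasoning LLMs be made.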


Responsible AI Technical Report

KT: Park, Yunjin; Yoon, Jungwon; Moon, Junhyung; Oh, Myunggyo; Lee, Wonhyuk; Kim, Sujin; Kim, Youngchol; Kim, Eunmi; Park, Hyoungjun; Shin, Eunyoung; Lee, Wonyoung; Lee, Somin; Ju, Minwook; Noh, Minsung; Jeong, Dongyoung; Kim, Jeongyeop; Park, Wanjin; Bae, Soonmin

arXiv.org Artificial Intelligence

KT developed a Responsible AI (RAI) assessment methodology and risk-mitigation technologies to ensure the safety and reliability of AI services. By analyzing the implementation of the Basic Act on AI and global AI governance trends, we established a unique approach to regulatory compliance and systematically identified and managed all potential risk factors from AI development to operation. We present a reliable assessment methodology that systematically verifies model safety and robustness based on KT's AI risk taxonomy, tailored to the domestic environment. We also provide practical tools for managing and mitigating identified AI risks. With the release of this report, we also release our proprietary guardrail, SafetyGuard, which blocks harmful responses from AI models in real time, supporting the enhancement of safety in the domestic AI development ecosystem. We believe these research outcomes provide valuable insights for organizations seeking to develop Responsible AI.
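A real-time guardrail of this kind sits between the model and the user and either passes a response through or replaces it with a refusal. The sketch below is an illustrative stand-in only; KT's SafetyGuard is proprietary, and the phrase-based policy check here is an assumption about the general shape of such a filter, not its implementation.

```python
# Toy real-time response guardrail: screen a model response against a
# policy list and withhold it if it matches. Real guardrails typically
# use trained safety classifiers rather than phrase matching.
BLOCKED_PHRASES = ("make a weapon", "steal credentials")  # toy policy list

def guard(response: str) -> tuple[str, str]:
    """Return ("blocked", refusal) on a policy hit, else ("allowed", response)."""
    lowered = response.lower()
    if any(phrase in lowered for phrase in BLOCKED_PHRASES):
        return "blocked", "The response was withheld for safety reasons."
    return "allowed", response
```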


Fact-Consistency Evaluation of Text-to-SQL Generation for Business Intelligence Using Exaone 3.5

Choi, Jeho

arXiv.org Artificial Intelligence

Large Language Models (LLMs) have shown promise in enabling natural language interfaces for structured data querying through text-to-SQL generation. However, their application in real-world Business Intelligence (BI) contexts remains limited due to semantic hallucinations, structural errors, and a lack of domain-specific evaluation frameworks. In this study, we propose a Fact-Consistency Evaluation Framework for assessing the semantic accuracy of LLM-generated SQL outputs using Exaone 3.5, an instruction-tuned, bilingual LLM optimized for enterprise tasks. We construct a domain-specific benchmark comprising 219 natural language business questions across five SQL complexity levels, derived from actual sales data in LG Electronics' internal BigQuery environment. Each question is paired with a gold-standard SQL query and a validated ground-truth answer. We evaluate model performance using answer accuracy, execution success rate, semantic error rate, and non-response rate. Experimental results show that while Exaone 3.5 performs well on simple aggregation tasks (93% accuracy in L1), it exhibits substantial degradation in arithmetic reasoning (4% accuracy in H1) and grouped ranking tasks (31% accuracy in H4), with semantic errors and non-responses concentrated in complex cases. Qualitative error analysis further identifies common failure types such as misapplied arithmetic logic, incomplete filtering, and incorrect grouping operations. Our findings highlight the current limitations of LLMs in business-critical environments and underscore the need for fact-consistency validation layers and hybrid reasoning approaches. This work contributes a reproducible benchmark and evaluation methodology for advancing reliable natural language interfaces to structured enterprise data systems.
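The core of such a fact-consistency check is executing the generated SQL and comparing its result to the validated ground-truth answer, classifying each case along the metrics named above. The sketch below uses SQLite for self-containment (the paper's environment is BigQuery), and the harness and labels are an illustrative assumption, not the paper's exact implementation.

```python
import sqlite3

def evaluate_sql(conn: sqlite3.Connection, generated_sql: str, gold_rows) -> str:
    """Classify a generated query: 'non_response' if it fails to execute,
    'correct' if its result matches the ground-truth rows, otherwise
    'semantic_error' (it ran, but answered the wrong question)."""
    try:
        rows = conn.execute(generated_sql).fetchall()
    except sqlite3.Error:
        return "non_response"
    return "correct" if rows == gold_rows else "semantic_error"
```

Execution success rate is then the fraction of cases not labeled `non_response`, and answer accuracy the fraction labeled `correct`.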


Tabular-TX: Theme-Explanation Structure-based Table Summarization via In-Context Learning

Kwack, TaeYoon; Kim, Jisoo; Jung, Ki Yong; Lee, DongGeon; Park, Heesun

arXiv.org Artificial Intelligence

This paper proposes a Theme-Explanation Structure-based Table Summarization (Tabular-TX) pipeline designed to efficiently process table data. Tabular-TX preprocesses table data by focusing on highlighted cells and then generates summary sentences structured with a Theme Part in the form of adverbial phrases followed by an Explanation Part in the form of clauses. In this process, customized analysis is performed by considering the structural characteristics and comparability of the table. Additionally, by utilizing In-Context Learning, Tabular-TX optimizes the analytical capabilities of large language models (LLMs) without the need for fine-tuning, effectively handling the structural complexity of table data. Results from applying the proposed Tabular-TX to generate table-based summaries demonstrated superior performance compared to existing fine-tuning-based methods, despite limitations in dataset size. Experimental results confirmed that Tabular-TX can process complex table data more effectively and established it as a new alternative for table-based question answering and summarization tasks, particularly in resource-constrained environments.
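In-context learning here means assembling a few-shot prompt that shows the model highlighted-cell inputs paired with Theme/Explanation summaries, then appending the new table for completion, with no fine-tuning. A minimal prompt-builder sketch follows; the instruction wording, field names, and table encoding are hypothetical, not Tabular-TX's actual prompts.

```python
# Toy few-shot prompt builder in the Theme-Explanation style: each exemplar
# pairs a table and its highlighted cells with a summary of the form
# "[Theme phrase], [Explanation clause]".
def build_prompt(examples, table, highlighted_cells):
    parts = ["Summarize the highlighted cells as: [Theme phrase], [Explanation clause]."]
    for ex in examples:
        parts.append(f"Table: {ex['table']}")
        parts.append(f"Highlighted: {ex['cells']}")
        parts.append(f"Summary: {ex['summary']}")
    # Append the query instance for the LLM to complete.
    parts.append(f"Table: {table}")
    parts.append(f"Highlighted: {highlighted_cells}")
    parts.append("Summary:")
    return "\n".join(parts)
```

Because the exemplars carry the Theme/Explanation structure, the model is steered toward that output format without any parameter updates.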


EXAONE 3.5: Series of Large Language Models for Real-world Use Cases

LG AI Research: An, Soyoung; Bae, Kyunghoon; Choi, Eunbi; Choi, Kibong; Choi, Stanley Jungkyu; Hong, Seokhee; Hwang, Junwon; Jeon, Hyojin; Jo, Gerrard Jeongwon; Jo, Hyunjik; Jung, Jiyeon; Jung, Yountae; Kim, Hyosang; Kim, Joonkee; Kim, Seonghwan; Kim, Soyeon; Kim, Sunkyoung; Kim, Yireun; Kim, Yongil; Kim, Youchul; Lee, Edward Hwayoung; Lee, Haeju; Lee, Honglak; Lee, Jinsik; Lee, Kyungmin; Lim, Woohyung; Park, Sangha; Park, Sooyoun; Park, Yongmin; Yang, Sihoon; Yeen, Heuiyeen; Yun, Hyeongu

arXiv.org Artificial Intelligence

This technical report introduces the EXAONE 3.5 instruction-tuned language models, developed and released by LG AI Research. The EXAONE 3.5 language models are offered in three configurations: 32B, 7.8B, and 2.4B. These models feature several standout capabilities: 1) exceptional instruction following capabilities in real-world scenarios, achieving the highest scores across seven benchmarks, 2) outstanding long-context comprehension, attaining the top performance in four benchmarks, and 3) competitive results compared to state-of-the-art open models of similar sizes across nine general benchmarks. The EXAONE 3.5 language models are open to anyone for research purposes and can be downloaded from https://huggingface.co/LGAI-EXAONE. For commercial use, please reach out to the official contact point of LG AI Research: contact_us@lgresearch.ai.


EXAONE 3.0 7.8B Instruction Tuned Language Model

LG AI Research: An, Soyoung; Bae, Kyunghoon; Choi, Eunbi; Choi, Stanley Jungkyu; Choi, Yemuk; Hong, Seokhee; Hong, Yeonjung; Hwang, Junwon; Jeon, Hyojin; Jo, Gerrard Jeongwon; Jo, Hyunjik; Jung, Jiyeon; Jung, Yountae; Kim, Euisoon; Kim, Hyosang; Kim, Joonkee; Kim, Seonghwan; Kim, Soyeon; Kim, Sunkyoung; Kim, Yireun; Kim, Youchul; Lee, Edward Hwayoung; Lee, Haeju; Lee, Honglak; Lee, Jinsik; Lee, Kyungmin; Lee, Moontae; Lee, Seungjun; Lim, Woohyung; Park, Sangha; Park, Sooyoun; Park, Yongmin; Seo, Boseong; Yang, Sihoon; Yeen, Heuiyeen; Yoo, Kyungjae; Yun, Hyeongu

arXiv.org Artificial Intelligence

We introduce the EXAONE 3.0 instruction-tuned language model, the first open model in the family of Large Language Models (LLMs) developed by LG AI Research. Among different model sizes, we publicly release the 7.8B instruction-tuned model to promote open research and innovation. Through extensive evaluations across a wide range of public and in-house benchmarks, EXAONE 3.0 demonstrates highly competitive real-world performance and instruction-following capability against other state-of-the-art open models of similar size. Our comparative analysis shows that EXAONE 3.0 excels particularly in Korean, while achieving compelling performance across general tasks and complex reasoning. With its strong real-world effectiveness and bilingual proficiency, we hope that EXAONE keeps contributing to advancements in Expert AI. Our EXAONE 3.0 instruction-tuned model is available at https://huggingface.co/LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct.