Evaluating Cultural Knowledge Processing in Large Language Models: A Cognitive Benchmarking Framework Integrating Retrieval-Augmented Generation

Hung-Shin Lee, Chen-Chi Chang, Ching-Yuan Chen, Yun-Hsiang Hsu

arXiv.org Artificial Intelligence 

ABSTRACT

Design/methodology/approach: This study proposes a cognitive benchmarking framework to evaluate how large language models (LLMs) process and apply culturally specific knowledge. The framework integrates Bloom's Taxonomy with Retrieval-Augmented Generation (RAG) to assess model performance across six hierarchical cognitive domains: Remembering, Understanding, Applying, Analyzing, Evaluating, and Creating. Using a curated Taiwanese Hakka digital cultural archive as the primary testbed, the evaluation measures the semantic accuracy and cultural relevance of LLM-generated responses.

Purpose: This research evaluates how effectively LLMs represent and generate minority cultural knowledge, specifically Taiwanese Hakka culture. To address this, the study proposes a structured and replicable evaluation framework integrating Bloom's Taxonomy and RAG. The research is guided by the following questions: (1) How do LLMs perform across different cognitive domains when processing Hakka ...
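The evaluation loop described in the abstract can be sketched in miniature. This is a hedged illustration, not the authors' implementation: the two-snippet archive, the keyword-overlap retriever, and the Jaccard token-overlap score (standing in for semantic accuracy) are all assumptions, and `model_answer` would come from an actual LLM call in the real framework.

```python
# Toy sketch of a Bloom's-Taxonomy-aligned RAG evaluation loop.
# Assumptions: keyword overlap stands in for a vector retriever, and
# Jaccard token overlap stands in for a semantic-accuracy metric.

BLOOM_LEVELS = [
    "Remembering", "Understanding", "Applying",
    "Analyzing", "Evaluating", "Creating",
]

# Hypothetical cultural-archive snippets (id, text).
ARCHIVE = [
    ("doc1", "the hakka tung blossom festival celebrates spring"),
    ("doc2", "lei cha is a traditional hakka ground tea"),
]

def tokens(text):
    return set(text.lower().split())

def retrieve(query, archive, k=1):
    """Rank snippets by token overlap with the query (toy retriever)."""
    ranked = sorted(archive, key=lambda d: -len(tokens(query) & tokens(d[1])))
    return [text for _, text in ranked[:k]]

def overlap_score(answer, reference):
    """Jaccard token overlap as a crude proxy for semantic accuracy."""
    a, r = tokens(answer), tokens(reference)
    return len(a & r) / len(a | r) if a | r else 0.0

def evaluate(benchmark):
    """benchmark: list of (bloom_level, question, model_answer, reference).

    Returns the mean score per cognitive domain. In a full RAG pipeline
    the retrieved context would be inserted into the LLM prompt; here it
    is fetched to show where grounding happens.
    """
    per_level = {}
    for level, question, model_answer, reference in benchmark:
        context = retrieve(question, ARCHIVE)  # grounding passages
        score = overlap_score(model_answer, reference)
        per_level.setdefault(level, []).append(score)
    return {lvl: sum(s) / len(s) for lvl, s in per_level.items()}

bench = [
    ("Remembering", "what is lei cha",
     "lei cha is a hakka ground tea",
     "lei cha is a traditional hakka ground tea"),
]
print(evaluate(bench))  # → {'Remembering': 0.875}
```

Scores averaged per Bloom level make the framework's hierarchical comparison explicit: a model may do well on Remembering yet drop off at Evaluating or Creating.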
