Evaluating Cultural Knowledge Processing in Large Language Models: A Cognitive Benchmarking Framework Integrating Retrieval-Augmented Generation
Lee, Hung-Shin, Chang, Chen-Chi, Chen, Ching-Yuan, Hsu, Yun-Hsiang
–arXiv.org Artificial Intelligence
ABSTRACT

Design/methodology/approach: This study proposes a cognitive benchmarking framework to evaluate how large language models (LLMs) process and apply culturally specific knowledge. The framework integrates Bloom's Taxonomy with Retrieval-Augmented Generation (RAG) to assess model performance across six hierarchical cognitive domains: Remembering, Understanding, Applying, Analyzing, Evaluating, and Creating. Using a curated Taiwanese Hakka digital cultural archive as the primary testbed, the evaluation measures the semantic accuracy and cultural relevance of LLM-generated responses.

Purpose: This research evaluates how effectively LLMs represent and generate minority cultural knowledge, specifically Taiwanese Hakka culture. To address this, the study proposes a structured and replicable evaluation framework integrating Bloom's Taxonomy and RAG. The research is guided by the following questions: (1) How do LLMs perform across different cognitive domains when processing Hakka ...
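The evaluation loop the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the retriever, generator, and semantic-accuracy scorer are toy word-overlap stand-ins, and the benchmark item format (Bloom level, question, reference answer) is an assumption.

```python
# Sketch of a Bloom's-Taxonomy x RAG evaluation loop over a cultural archive.
# All components here are simplified, hypothetical stand-ins.

BLOOM_LEVELS = ["Remembering", "Understanding", "Applying",
                "Analyzing", "Evaluating", "Creating"]

def retrieve(query, archive, k=2):
    """Toy retriever: rank archive passages by word overlap with the query."""
    q = set(query.lower().split())
    ranked = sorted(archive, key=lambda p: -len(q & set(p.lower().split())))
    return ranked[:k]

def generate(question, context):
    """Stand-in for the LLM call: in a real run this would prompt a model
    with the question plus retrieved context. Here it echoes the context."""
    return " ".join(context)

def semantic_accuracy(answer, reference):
    """Toy score: fraction of reference words recovered in the answer."""
    ref, ans = set(reference.lower().split()), set(answer.lower().split())
    return len(ref & ans) / len(ref) if ref else 0.0

def evaluate(benchmark, archive):
    """Average score per cognitive domain over a benchmark of
    (bloom_level, question, reference_answer) items."""
    totals, counts = {}, {}
    for level, question, reference in benchmark:
        context = retrieve(question, archive)
        answer = generate(question, context)
        totals[level] = totals.get(level, 0.0) + semantic_accuracy(answer, reference)
        counts[level] = counts.get(level, 0) + 1
    return {lvl: totals[lvl] / counts[lvl] for lvl in totals}
```

In the paper's framework, each benchmark question targets one of the six cognitive domains, so the per-level averages expose where a model's command of the cultural material breaks down (e.g. strong recall but weak application).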
Nov-4-2025