Talking to the brain: Using Large Language Models as Proxies to Model Brain Semantic Representation
Liu, Xin, Zhang, Ziyue, Nie, Jingxin
Traditional psychological experiments utilizing naturalistic stimuli face challenges in manual annotation and ecological validity. To address this, we introduce a novel paradigm leveraging multimodal large language models (LLMs) as proxies to extract rich semantic information from naturalistic images through a Visual Question Answering (VQA) strategy for analyzing human visual semantic representation. LLM-derived representations successfully predict established neural activity patterns measured by fMRI (e.g., faces, buildings), validating the approach's feasibility and revealing hierarchical semantic organization across cortical regions. A brain semantic network constructed from LLM-derived representations identifies meaningful clusters reflecting functional and contextual associations. This innovative methodology offers a powerful solution for investigating brain semantic organization with naturalistic stimuli, overcoming limitations of traditional annotation methods and paving the way for more ecologically valid explorations of human cognition.
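The sketch below is a minimal illustration of the kind of pipeline the abstract describes, not the authors' implementation: a hypothetical `query_vqa_model` backend (a placeholder for whatever multimodal LLM is used) answers a small set of assumed semantic probe questions per image, and the resulting feature vectors are mapped to fMRI voxel responses with an ordinary ridge encoding model. Question wording, feature coding, and evaluation details are all assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

# Hypothetical placeholder: in practice this would query a multimodal LLM
# (any VQA-capable model) about one image and map its answer to a score in [0, 1].
def query_vqa_model(image, question: str) -> float:
    raise NotImplementedError("plug in a multimodal LLM / VQA backend here")

# Example semantic probes (assumed, for illustration only).
QUESTIONS = [
    "Does this image contain a human face?",
    "Does this image show a building?",
    "Is this an outdoor scene?",
    "Are there animals in this image?",
]

def image_to_semantic_vector(image) -> np.ndarray:
    """Encode one naturalistic image as a vector of VQA-derived semantic scores."""
    return np.array([query_vqa_model(image, q) for q in QUESTIONS])

def fit_encoding_model(features: np.ndarray, voxels: np.ndarray):
    """Fit a ridge encoding model from LLM-derived features (n_images x n_features)
    to fMRI responses (n_images x n_voxels) and return per-voxel held-out
    prediction correlations."""
    X_tr, X_te, Y_tr, Y_te = train_test_split(
        features, voxels, test_size=0.2, random_state=0
    )
    model = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X_tr, Y_tr)
    Y_hat = model.predict(X_te)
    # Pearson correlation between predicted and measured response, per voxel.
    r = np.array(
        [np.corrcoef(Y_hat[:, v], Y_te[:, v])[0, 1] for v in range(voxels.shape[1])]
    )
    return model, r
```

A brain semantic network along the lines of the abstract could then be built by correlating region-wise prediction or weight profiles and clustering the resulting similarity matrix; that step is omitted from this sketch.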
arXiv.org Artificial Intelligence
Feb-25-2025
- Country:
  - Asia > China > Guangdong Province (0.28)
- Genre:
  - Research Report > New Finding (1.00)
- Industry:
  - Health & Medicine
    - Diagnostic Medicine > Imaging (0.68)
    - Health Care Technology (0.91)
    - Therapeutic Area > Neurology (1.00)
  - Leisure & Entertainment (0.93)
- Technology: