Generative AI as a metacognitive agent: A comparative mixed-method study with human participants on ICF-mimicking exam performance
Pavlovic, Jelena, Krstic, Jugoslav, Mitrovic, Luka, Babic, Djordje, Milosavljevic, Adrijana, Nikolic, Milena, Karaklic, Tijana, Mitrovic, Tijana
arXiv.org Artificial Intelligence
Jelena Pavlović (University of Belgrade, Faculty of Philosophy & Koučing centar Research Lab); Jugoslav Krstić, Luka Mitrović, Đorđe Babić, Adrijana Milosavljević, Milena Nikolić, Tijana Karaklić & Tijana Mitrović (Koučing centar Research Lab)

Abstract

This study investigates the metacognitive capabilities of Large Language Models (LLMs) relative to human metacognition in the context of an International Coaching Federation (ICF)-mimicking exam, a situational judgment test of coaching competencies. Using a mixed-method approach, we assessed the metacognitive performance (including sensitivity, accuracy of probabilistic predictions, and bias) of human participants and five advanced LLMs: GPT-4, Claude 3 Opus, Mistral Large, Llama 3, and Gemini 1.5 Pro. The results indicate that the LLMs outperformed humans across all metacognitive metrics, most notably through reduced overconfidence. However, both the LLMs and the human participants showed limited adaptability in ambiguous scenarios, adhering closely to predefined decision frameworks. The study suggests that generative AI can engage in human-like metacognitive processing without conscious awareness. Implications are discussed in relation to the development of AI simulators that scaffold the cognitive and metacognitive aspects of mastering coaching competencies and, more broadly, in relation to the development of metacognitive modules that lead toward more autonomous and intuitive AI systems.

Keywords: Generative AI, metacognition, metacognitive agents, ICF exam

Introduction

Metacognition, the ability to understand and regulate one's own cognitive processes, is a fundamental aspect of human learning, decision making, and problem solving.
Traditionally viewed as a conscious process, metacognition involves activities such as planning, monitoring, and evaluating one's performance during cognitive tasks. However, recent studies suggest that certain metacognitive processes can occur without conscious awareness, challenging the traditional boundaries of how metacognition is understood and measured (Kentridge & Heywood, 2000). In the field of generative artificial intelligence, and particularly in Large Language Models (LLMs), metacognitive-like processes may manifest as algorithms adapt, learn, and optimize their performance. This raises intriguing questions about the nature of metacognition in non-conscious entities and about how it compares to human metacognitive processes. The present study explores these questions by comparing the metacognitive processes of human participants and LLMs in the context of performance on an International Coaching Federation (ICF)-mimicking exam.
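The metacognitive metrics named above (bias, accuracy of probabilistic predictions, and sensitivity) can be illustrated with their standard definitions from the calibration literature. The function below is a minimal sketch under those assumed definitions, not the study's actual scoring procedure: bias as mean confidence minus mean accuracy (positive values indicate overconfidence), probabilistic accuracy as the Brier score, and sensitivity as the gap in mean confidence between correct and incorrect answers.

```python
from statistics import mean

def metacognitive_metrics(confidences, correct):
    """Compute simple calibration statistics from per-item
    confidence ratings (0..1) and correctness flags (0/1)."""
    # Bias: positive when average confidence exceeds average accuracy
    # (overconfidence); negative indicates underconfidence.
    bias = mean(confidences) - mean(correct)
    # Brier score: mean squared error of the probabilistic
    # predictions (lower is better).
    brier = mean((c - k) ** 2 for c, k in zip(confidences, correct))
    # Sensitivity proxy: how much higher confidence is on correct
    # answers than on incorrect ones (higher is better discrimination).
    conf_correct = [c for c, k in zip(confidences, correct) if k]
    conf_wrong = [c for c, k in zip(confidences, correct) if not k]
    sensitivity = (mean(conf_correct) if conf_correct else 0.0) \
        - (mean(conf_wrong) if conf_wrong else 0.0)
    return {"bias": bias, "brier": brier, "sensitivity": sensitivity}

# Hypothetical responder: slightly underconfident, well discriminating.
m = metacognitive_metrics([0.9, 0.8, 0.6, 0.4], [1, 1, 1, 0])
```

On this toy data the bias is negative (mean confidence 0.675 against accuracy 0.75), consistent with slight underconfidence rather than the overconfidence the study reports for humans.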
May 7, 2024