Executable Epistemology: The Structured Cognitive Loop as an Architecture of Intentional Understanding

Kim, Myung Ho

arXiv.org Artificial Intelligence

Large language models exhibit intelligence without genuine epistemic understanding, exposing a key gap: the absence of epistemic architecture. This paper introduces the Structured Cognitive Loop (SCL) as an executable epistemological framework for emergent intelligence. Unlike traditional AI research asking "what is intelligence?" (ontological), SCL asks "under what conditions does cognition emerge?" (epistemological). Grounded in philosophy of mind and cognitive phenomenology, SCL bridges conceptual philosophy and implementable cognition. Drawing on process philosophy, enactive cognition, and extended mind theory, we define intelligence not as a property but as a performed process -- a continuous loop of judgment, memory, control, action, and regulation. SCL makes three contributions. First, it operationalizes philosophical insights into computationally interpretable structures, enabling "executable epistemology" -- philosophy as structural experiment. Second, it shows that functional separation within cognitive architecture yields more coherent and interpretable behavior than monolithic prompt-based systems, supported by agent evaluations. Third, it redefines intelligence: not representational accuracy but the capacity to reconstruct its own epistemic state through intentional understanding. This framework impacts philosophy of mind, epistemology, and AI. For philosophy, it allows theories of cognition to be enacted and tested. For AI, it grounds behavior in epistemic structure rather than statistical regularity. For epistemology, it frames knowledge not as truth possession but as continuous reconstruction within a phenomenologically coherent loop. We situate SCL within debates on cognitive phenomenology, emergence, normativity, and intentionality, arguing that real progress requires not larger models but architectures that realize cognitive principles structurally.
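The abstract describes the loop's five phases as functions performed in sequence. A minimal sketch, assuming each phase can be modeled as a transformation of an epistemic state (the function names and composition are illustrative assumptions, not the paper's implementation):

```python
def scl_step(state, judgment, memory, control, action, regulation):
    """Hypothetical sketch of one pass through an SCL-style loop: each
    phase is a function that reconstructs the epistemic state, rather
    than a stored property of the agent. All names here are illustrative
    assumptions drawn from the abstract's description."""
    state = judgment(state)    # interpret the current situation
    state = memory(state)      # integrate with prior epistemic state
    state = control(state)     # decide what to do next
    state = action(state)      # act on the environment
    state = regulation(state)  # assess and adjust the loop itself
    return state
```

The point of the sketch is functional separation: each phase is a distinct, inspectable component rather than one monolithic prompt, which is what the abstract credits for more coherent and interpretable behavior.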


The Social Life of Industrial Arms: How Arousal and Attention Shape Human-Robot Interaction

El-Helou, Roy, Pan, Matthew K. X. J

arXiv.org Artificial Intelligence

Ingenuity Labs Research Institute, Queen's University, Kingston, Canada (matthew.pan@queensu.ca)

This study explores how human perceptions of a non-anthropomorphic robotic manipulator can be shaped by two key dimensions of behaviour: arousal, defined as the robot's movement energy and expressiveness, and attention, defined as the robot's capacity to selectively orient toward and engage with a user. We present an integrated behaviour system that applies and extends existing movement-centric design principles to non-anthropomorphic robots. Our system combines a gaze-like attention engine with an arousal-modulated motion layer to explore how expressive and interactive behaviours influence social perception in robotic manipulators. In a user study, we find that robots exhibiting high attention--actively directing their focus toward users--are perceived as warmer and more competent, intentional, and lifelike. In contrast, high arousal--characterized by fast, expansive, and energetic motions--increases perceptions of discomfort and disturbance. Importantly, a combination of focused attention and moderate arousal yields the highest ratings of trust and sociability, while excessive arousal diminishes social engagement.


Noosemia: toward a Cognitive and Phenomenological Account of Intentionality Attribution in Human-Generative AI Interaction

De Santis, Enrico, Rizzi, Antonello

arXiv.org Artificial Intelligence

This paper introduces and formalizes Noosemìa, a novel cognitive-phenomenological pattern emerging from human interaction with generative AI systems, particularly those enabling dialogic or multimodal exchanges. We propose a multidisciplinary framework to explain how, under certain conditions, users attribute intentionality, agency, and even interiority to these systems - a process grounded not in physical resemblance, but in linguistic performance, epistemic opacity, and emergent technological complexity. By linking an LLM declination of meaning holism to our technical notion of the LLM Contextual Cognitive Field, we clarify how LLMs construct meaning relationally and how coherence and a simulacrum of agency arise at the human-AI interface. The analysis situates noosemia alongside pareidolia, animism, the intentional stance and the uncanny valley, distinguishing its unique characteristics. We also introduce a-noosemia to describe the phenomenological withdrawal of such projections. The paper concludes with reflections on the broader philosophical, epistemological and social implications of noosemic dynamics and directions for future research.


On The Role of Intentionality in Knowledge Representation: Analyzing Scene Context for Cognitive Agents with a Tiny Language Model

Burgess, Mark

arXiv.org Artificial Intelligence

Cognitive abilities, which include ideas like intentionality and consciousness, have long been viewed in Western philosophy as exclusive to the human realm. Intent is roundly considered justifiable only with minimum requirements for self-awareness or situational comprehension. However, such hard-line views have softened gradually with modern enlightenment, and more of us are likely to accept that terms such as 'agency', 'intelligence', and even 'emotion' can apply to other species too. Even plants lean into sunlight in an intentional way; the identification of an intention doesn't have to arise from the plant to be true. Lately, the possibility of such abilities has been extended even to artificial systems, which some find more acceptable, though a modern version of the privilege argument persists in a distinction between 'simple' machinery and 'complex' biology, which many believe still holds some principled leap in understanding. Ideological 'blood-brain barriers' like these continue to undermine efforts to form a rational causal explanation of intent, leading extremists to clutch at esoteric straws like quantum mechanics or complexity theory to account for perceived magic. In this note, I address another apparent schism that may shed light on these questions: the difference between process dynamics (the realm of physics) and interpretive semantics (the realm of linguistics and philosophy), and the suggestion that (deep down) intentionality might be a relatively simple phenomenon with an energetic explanation (as trust has been shown to be [9]). The recent acceptance of attention mechanisms in Large Language Models is a related example [19, 22].


Human Creativity and AI

Xie, Shengyi

arXiv.org Artificial Intelligence

With the advancement of science and technology, the philosophy of creativity has undergone significant reinterpretation. This paper investigates contemporary research in the fields of psychology, cognitive neuroscience, and the philosophy of creativity, particularly in the context of the development of artificial intelligence (AI) techniques. It aims to address the central question: Can AI exhibit creativity? The paper reviews the historical perspectives on the philosophy of creativity and explores the influence of psychological advancements on the study of creativity. Furthermore, it analyzes various definitions of creativity and examines the responses of naturalism and cognitive neuroscience to the concept of creativity.


Wanting to be Understood

Fernando, Chrisantha, Banarse, Dylan, Osindero, Simon

arXiv.org Artificial Intelligence

This paper explores an intrinsic motivation for mutual awareness, hypothesizing that humans possess a fundamental drive to understand and to be understood even in the absence of extrinsic rewards. Through simulations of the perceptual crossing paradigm, we explore the effect of various internal reward functions in reinforcement learning agents. The drive to understand is implemented as an active inference type artificial curiosity reward, whereas the drive to be understood is implemented through intrinsic rewards for imitation, influence/impressionability, and sub-reaction time anticipation of the other. Results indicate that while artificial curiosity alone does not lead to a preference for social interaction, rewards emphasizing reciprocal understanding successfully drive agents to prioritize interaction. We demonstrate that this intrinsic motivation can facilitate cooperation in tasks where only one agent receives extrinsic reward for the behaviour of the other.
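The two drives described above admit a compact illustration: curiosity as forward-model prediction error, and being understood as a bonus when the partner mirrors the agent. This is a toy sketch under those assumptions; the function name, weights, and reward shapes are illustrative, not the paper's actual implementation (which also includes influence/impressionability and anticipation terms):

```python
def intrinsic_reward(pred_next_obs, next_obs, my_action, other_action,
                     w_curiosity=1.0, w_imitation=1.0):
    """Hedged sketch of two intrinsic drives: curiosity as forward-model
    prediction error (the drive to understand), plus an imitation bonus
    when the partner mirrors the agent's action (the drive to be
    understood). Names and weights are illustrative assumptions."""
    # surprise: mean squared error between predicted and actual observation
    curiosity = sum((p - o) ** 2
                    for p, o in zip(pred_next_obs, next_obs)) / len(next_obs)
    # being understood: reward when the other agent imitates us
    imitation = 1.0 if other_action == my_action else 0.0
    return w_curiosity * curiosity + w_imitation * imitation
```

The paper's finding maps onto the weights: with `w_imitation = 0` (curiosity alone), nothing in the reward favours a social partner over any other surprising stimulus; the reciprocal terms are what make interaction preferable.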


Model-Free RL Agents Demonstrate System 1-Like Intentionality

Ashton, Hal, Franklin, Matija

arXiv.org Artificial Intelligence

This paper argues that model-free reinforcement learning (RL) agents, while lacking explicit planning mechanisms, exhibit behaviours that can be analogised to System 1 ("thinking fast") processes in human cognition. Unlike model-based RL agents, which operate akin to System 2 ("thinking slow") reasoning by leveraging internal representations for planning, model-free agents react to environmental stimuli without anticipatory modelling. We propose a novel framework linking the dichotomy of System 1 and System 2 to the distinction between model-free and model-based RL. This framing challenges the prevailing assumption that intentionality and purposeful behaviour require planning, suggesting instead that intentionality can manifest in the structured, reactive behaviours of model-free agents. By drawing on interdisciplinary insights from cognitive psychology, legal theory, and experimental jurisprudence, we explore the implications of this perspective for attributing responsibility and ensuring AI safety. These insights advocate for a broader, contextually informed interpretation of intentionality in RL systems, with implications for their ethical deployment and regulation.
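The System 1 / System 2 analogy rests on a concrete computational difference, sketched minimally below (an illustrative contrast under standard RL definitions, not code from the paper): the model-free agent updates values reactively from experience, while the model-based agent consults an internal transition model before acting.

```python
ACTIONS = [0, 1]  # a toy two-action environment

def q_update_model_free(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    """Model-free TD update ("thinking fast"): values are adjusted
    reactively from observed transitions, with no internal model."""
    best_next = max(Q.get((s_next, b), 0.0) for b in ACTIONS)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)

def plan_model_based(model, Q, s, gamma=0.9):
    """Model-based choice ("thinking slow"): simulate each action with a
    known transition model (reward, next state) before committing."""
    def value(a):
        r, s_next = model(s, a)
        return r + gamma * max(Q.get((s_next, b), 0.0) for b in ACTIONS)
    return max(ACTIONS, key=value)
```

The paper's claim is that behaviour shaped by the first kind of update can still be structured and goal-directed enough to ground attributions of intentionality, even though nothing in it anticipates future states.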


Vision Language Models See What You Want but not What You See

Gao, Qingying, Li, Yijiang, Lyu, Haiyun, Sun, Haoran, Luo, Dezhi, Deng, Hokin

arXiv.org Artificial Intelligence

Knowing others' intentions and taking others' perspectives are two core components of human intelligence typically considered as instantiations of theory of mind. Infiltrating machines with these abilities is an important step towards building human-level artificial intelligence. We here investigate intentionality understanding and perspective-taking in Vision Language Models and, for this purpose, have created the IntentBench and PerspectBench datasets, which contain over 400 cognitive experiments grounded in real-world scenarios and classic cognitive tasks. Surprisingly, we find that VLMs achieve high performance in intentionality understanding but lower performance in perspective-taking using our two datasets. This challenges the common belief in the cognitive science literature that perspective-taking at the corresponding modality is necessary for intentionality understanding.


Generative midtended cognition and Artificial Intelligence. Thinging with thinging things

Barandiaran, Xabier E., Pérez-Verdugo, Marta

arXiv.org Artificial Intelligence

This paper introduces the concept of ``generative midtended cognition'', exploring the integration of generative AI with human cognition. The term "generative" reflects AI's ability to iteratively produce structured outputs, while "midtended" captures the potential hybrid (human-AI) nature of the process. It stands between traditional conceptions of intended creation, understood as directed from within, and extended processes that bring exo-biological processes into the creative process. We examine current generative technologies (based on multimodal transformer architectures typical of large language models like ChatGPT), to explain how they can transform human cognitive agency beyond what standard theories of extended cognition can capture. We suggest that the type of cognitive activity typical of the coupling between a human and generative technologies is closer (but not equivalent) to social cognition than to classical extended cognitive paradigms. Yet, it deserves a specific treatment. We provide an explicit definition of generative midtended cognition in which we treat interventions by AI systems as constitutive of the agent's intentional creative processes. Furthermore, we distinguish two dimensions of generative hybrid creativity: 1. Width: captures the sensitivity to the context of the generative process (from the single letter to the whole historical and surrounding data), 2. Depth: captures the granularity of the iteration loops involved in the process. Generative midtended cognition stands at a middle depth between conversational forms of cognition, in which complete utterances or creative units are exchanged, and micro-cognitive (e.g. neural) subpersonal processes. Finally, the paper discusses the potential risks and benefits of widespread generative AI adoption, including the challenges of authenticity, generative power asymmetry, and creative boost or atrophy.


AcrosticSleuth: Probabilistic Identification and Ranking of Acrostics in Multilingual Corpora

Fedchin, Aleksandr, Cooperman, Isabel, Chaudhuri, Pramit, Dexter, Joseph P.

arXiv.org Artificial Intelligence

For centuries, writers have hidden messages in their texts as acrostics, where initial letters of consecutive lines or paragraphs form meaningful words or phrases. Scholars searching for acrostics manually can only focus on a few authors at a time and often favor qualitative arguments in discussing intentionality. We aim to put the study of acrostics on firmer statistical footing by presenting AcrosticSleuth, a first-of-its-kind tool that automatically identifies acrostics and ranks them by the probability that the sequence of characters does not occur by chance (and therefore may have been inserted intentionally). Acrostics are rare, so we formalize the problem as a binary classification task in the presence of extreme class imbalance. To evaluate AcrosticSleuth, we present the Acrostic Identification Dataset (AcrostID), a collection of acrostics from the WikiSource online database. Despite the class imbalance, AcrosticSleuth achieves F1 scores of 0.39, 0.59, and 0.66 on the French, English, and Russian subdomains of WikiSource, respectively. We further demonstrate that AcrosticSleuth can identify previously unknown high-profile instances of wordplay, such as the acrostic spelling ARSPOETICA (``art of poetry'') by Italian Humanist Albertino Mussato and English philosopher Thomas Hobbes' signature in the opening paragraphs of The Elements of Law.
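The core ranking idea, flagging initial-letter sequences that are unlikely to line up by chance, can be illustrated with a toy sketch. The independent letter-frequency model below is a deliberate simplification for illustration; AcrosticSleuth's actual probabilistic model and its handling of extreme class imbalance are more sophisticated:

```python
import math

def acrostic_candidates(lines, wordlist, letter_freq, min_len=4, max_len=12):
    """Toy sketch: collect the initial letters of consecutive lines,
    match substrings against a wordlist, and rank hits by how unlikely
    the letter sequence is under an independent letter-frequency model
    (lower chance probability = stronger acrostic candidate)."""
    initials = "".join(ln.strip()[0].lower() for ln in lines if ln.strip())
    hits = []
    for start in range(len(initials)):
        for end in range(start + min_len,
                         min(start + max_len, len(initials)) + 1):
            candidate = initials[start:end]
            if candidate in wordlist:
                # probability these letters line up by chance, assuming
                # independent draws from the background letter frequencies
                p_chance = math.prod(letter_freq.get(c, 0.01)
                                     for c in candidate)
                hits.append((candidate, start, p_chance))
    return sorted(hits, key=lambda h: h[2])  # most surprising first
```

Sorting by chance probability is what turns detection into ranking: long words built from rare letters float to the top, while short common-letter coincidences sink, which mirrors the paper's framing of intentionality as statistical surprise.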