artificial system
Can machines perform a qualitative data analysis? Reading the debate with Alan Turing
This paper reflects on the literature that rejects the use of Large Language Models (LLMs) in qualitative data analysis. It illustrates, through empirical evidence as well as critical reflections, why the current critical debate is focusing on the wrong problems. The paper proposes that the focus of research on the use of LLMs for qualitative analysis is not the method per se, but rather the empirical investigation of an artificial system performing an analysis. The paper builds on the seminal work of Alan Turing and reads the current debate through key ideas from Turing's "Computing Machinery and Intelligence". This paper therefore reframes the debate on qualitative analysis with LLMs and argues that, rather than asking whether machines can perform qualitative analysis in principle, we should ask whether with LLMs we can produce analyses that are sufficiently comparable to those of human analysts. In the final part, the contrary views to performing qualitative analysis with LLMs are analysed using the same writing and rhetorical style that Turing used in his seminal work.
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > Illinois > Champaign County > Champaign (0.04)
- Europe > United Kingdom > England > Essex > Colchester (0.04)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (1.00)
- Health & Medicine > Consumer Health (0.67)
- Information Technology > Security & Privacy (0.66)
Going Places: Place Recognition in Artificial and Natural Systems
Milford, Michael, Fischer, Tobias
Place recognition--the process of an animal, person or robot recognizing a familiar location in the world--has attracted significant attention across multiple disciplines. In animals, this capability has evolved over millions of years through sophisticated neural mechanisms: hippocampal place cells fire at specific spatial locations (1), entorhinal grid cells provide spatial coordinates through hexagonal firing patterns (2), while diverse species demonstrate remarkable navigation--from desert ants using celestial cues and visual panoramas (3) to migratory birds returning to precise breeding sites across hemispheric distances (4). Humans extend these biological foundations with unique cognitive abilities, recognizing places not only through sensory perception but also through semantic meaning, emotional associations, and cultural context--enabling us to identify familiar locations from descriptions, memories, or even fictional narratives (5). In artificial systems, place recognition underpins core robotics functions such as localization, mapping, and long-term autonomy, developing into a mature field that, while sometimes inspired by biological principles, often diverges significantly in implementation to optimize for computational efficiency and metric accuracy. As research has grown in the area, so too has a rich landscape of surveys and reviews that reflect the field's evolution and diversification.
- North America > United States > New York (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Oceania > Australia > Queensland > Brisbane (0.04)
- (4 more...)
- Research Report (0.81)
- Overview (0.66)
Plantbot: Integrating Plant and Robot through LLM Modular Agent Networks
Masumori, Atsushi, Maruyama, Norihiro, Doi, Itsuki, Sato, Hiroki, Ikegami, Takashi
We introduce Plantbot, a hybrid lifeform that connects a living plant with a mobile robot through a network of large language model (LLM) modules. Each module - responsible for sensing, vision, dialogue, or action - operates asynchronously and communicates via natural language, enabling seamless interaction across biological and artificial domains. This architecture leverages the capacity of LLMs to serve as hybrid interfaces, where natural language functions as a universal protocol, translating multimodal data (soil moisture, temperature, visual context) into linguistic messages that coordinate system behaviors. The integrated network transforms plant states into robotic actions, installing the normativity essential for agency within the sensor-motor loop. By combining biological and robotic elements through LLM-mediated communication, Plantbot behaves as an embodied, adaptive agent capable of responding autonomously to environmental conditions. This approach suggests possibilities for a new model of artificial life, in which decentralized coordination among LLM modules enables novel interactions between biological and artificial systems.
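The abstract describes the module network only at a high level. A minimal sketch of the idea, with plain stub functions standing in for the actual asynchronous LLM modules (all names and messages here are hypothetical illustrations, not Plantbot's real interfaces), might route sensor readings through natural-language messages like this:

```python
# Sketch of an LLM-module network in the spirit of Plantbot.
# Stub functions stand in for the sensing, dialogue, and action
# modules; plain strings serve as the natural-language protocol.

def sensing_module(soil_moisture: float, temperature: float) -> str:
    """Translate raw sensor values into a natural-language message."""
    state = "dry" if soil_moisture < 0.3 else "moist"
    return f"The soil is {state} ({soil_moisture:.2f}) at {temperature:.1f} C."

def dialogue_module(sensor_message: str) -> str:
    """Decide, in language, what the robot should do about the plant state."""
    if "dry" in sensor_message:
        return "Move toward the watering station."
    return "Stay in place and keep monitoring."

def action_module(command: str) -> str:
    """Map a linguistic command onto a (stubbed) motor primitive."""
    return "drive_to_water" if "watering" in command else "idle"

# One pass through the sensor-motor loop, mediated entirely by text.
message = sensing_module(soil_moisture=0.15, temperature=22.4)
command = dialogue_module(message)
action = action_module(command)
print(action)  # → drive_to_water
```

The point of the sketch is the protocol, not the logic: because every hop is a string, any module could be swapped for an LLM call without changing its neighbors.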
- North America > United States (0.05)
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
AGI as Second Being: The Structural-Generative Ontology of Intelligence
Artificial intelligence is often measured by the range of tasks it can perform. Yet wide ability without depth remains only an imitation. This paper proposes a Structural-Generative Ontology of Intelligence: true intelligence exists only when a system can generate new structures, coordinate them into reasons, and sustain its identity over time. These three conditions -- generativity, coordination, and sustaining -- define the depth that underlies real intelligence. Current AI systems, however broad in function, remain surface simulations because they lack this depth. Breadth is not the source of intelligence but the growth that follows from depth. If future systems were to meet these conditions, they would no longer be mere tools, but could be seen as a possible Second Being, standing alongside yet distinct from human existence.
- North America > United States > Illinois > Cook County > Chicago (0.05)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- (4 more...)
Challenges for artificial cognitive systems
Gomila, Antoni, Müller, Vincent C.
It can be said that neural networks (especially in their sophisticated forms) account for such abstract recoding, but this is not fully satisfactory, because there is just one network in the model; a different approach is to use layers of neural networks, where the higher level takes as inputs the patterns of the lower, sensory, layers (Sun, 2006), but up to now this is done "by hand". Still another approach, of Vygotskian inspiration, sees in the use of public symbols the key to understanding cognitive, abstract recoding (Gomila, 2012), but the application of this approach within artificial cognitive systems is just beginning. Flexible use of knowledge: Extracting world regularities and contingencies would be useless unless such knowledge can guide future action in real time in an uncertain environment. This may require, in the end, as anticipated above, behavioral unpredictability, which is a property that runs contrary to the technical requirements of robustness and reliability for artificial systems (to guarantee safety, as the principal engineer's command). The critical issue for flexibility is related to how the knowledge is "stored" (see previous section), and therefore, how it is accessed. The major roadblock to carrying this out - regardless of approach - is again combinatorial explosion, whether at the level of propositional representations, as in classical AI, or at the level of degrees of freedom for the control of actuators. But it is also a problem to "judge", in a given situation, which piece of knowledge is the best one with which to categorize it, given what the system knows.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.06)
- North America > United States > New York (0.05)
- (3 more...)
- Information Technology > Artificial Intelligence > Robots (1.00)
- Information Technology > Artificial Intelligence > Cognitive Science (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (0.46)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.45)
The Way We Prompt: Conceptual Blending, Neural Dynamics, and Prompt-Induced Transitions in LLMs
Large language models (LLMs), inspired by neuroscience, exhibit behaviors that often evoke a sense of personality and intelligence -- yet the mechanisms behind these effects remain elusive. Here, we operationalize Conceptual Blending Theory (CBT) as an experimental framework, using prompt-based methods to reveal how LLMs blend and compress meaning. By systematically investigating Prompt-Induced Transitions (PIT) and Prompt-Induced Hallucinations (PIH), we uncover structural parallels and divergences between artificial and biological cognition. Our approach bridges linguistics, neuroscience, and empirical AI research, demonstrating that human-AI collaboration can serve as a living prototype for the future of cognitive science. This work proposes prompt engineering not just as a technical tool, but as a scientific method for probing the deep structure of meaning itself.
- Asia > Japan > Honshū > Chūbu > Ishikawa Prefecture > Kanazawa (0.05)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
Emergence of Self-Awareness in Artificial Systems: A Minimalist Three-Layer Approach to Artificial Consciousness
This paper proposes a minimalist three-layer model for artificial consciousness, focusing on the emergence of self-awareness. The model comprises a Cognitive Integration Layer, a Pattern Prediction Layer, and an Instinctive Response Layer, interacting with Access-Oriented and Pattern-Integrated Memory systems. Unlike brain-replication approaches, we aim to achieve minimal self-awareness through essential elements only. Self-awareness emerges from layer interactions and dynamic self-modeling, without initial explicit self-programming. We detail each component's structure, function, and implementation strategies, addressing technical feasibility. This research offers new perspectives on consciousness emergence in artificial systems, with potential implications for human consciousness understanding and adaptable AI development. We conclude by discussing ethical considerations and future research directions.
A Mathematical Framework for Consciousness in Neural Networks
This paper presents a novel mathematical framework for bridging the explanatory gap (Levine, 1983) between consciousness and its physical correlates. Specifically, we propose that qualia correspond to singularities in the mathematical representations of neural network topology. Crucially, we do not claim that qualia are singularities or that singularities "explain" why qualia feel as they do. Instead, we propose that singularities serve as principled, coordinate-invariant markers of points where attempts at purely quantitative description of a system's dynamics reach an in-principle limit. By integrating these formal markers of irreducibility into models of the physical correlates of consciousness, we establish a framework that recognizes qualia as phenomena inherently beyond reduction to complexity, computation, or information. This approach draws on insights from philosophy of mind, mathematics, cognitive neuroscience, and artificial intelligence (AI). It does not solve the hard problem of consciousness (Chalmers, 1995), but it advances the discourse by integrating the irreducible nature of qualia into a rigorous, physicalist framework. While primarily theoretical, these insights also open avenues for future AI and artificial consciousness (AC) research, suggesting that recognizing and harnessing irreducible topological features may be an important unlock in moving beyond incremental, scale-based improvements and toward artificial general intelligence (AGI) and AC.
- North America > United States (0.14)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
The Illusion-Illusion: Vision Language Models See Illusions Where There are None
Illusions are entertaining, but they are also a useful diagnostic tool in cognitive science, philosophy, and neuroscience. A typical illusion shows a gap between how something "really is" and how something "appears to be", and this gap helps us understand the mental processing that leads to how something appears to be. Illusions are also useful for investigating artificial systems, and much research has examined whether computational models of perception fall prey to the same illusions as people. Here, I invert the standard use of perceptual illusions to examine basic processing errors in current vision language models. I present these models with illusory-illusions, neighbors of common illusions that should not elicit processing errors. These include such things as perfectly reasonable ducks, crooked lines that truly are crooked, circles that seem to have different sizes because they are, in fact, of different sizes, and so on. I show that many current vision language systems mistakenly see these illusion-illusions as illusions. I suggest that such failures are part of broader failures already discussed in the literature.
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.14)
- Europe > Germany > Saxony > Leipzig (0.04)
Investigating Plausibility of Biologically Inspired Bayesian Learning in ANNs
Catastrophic forgetting has been the leading issue in the domain of lifelong learning in artificial systems. Current artificial systems are reasonably good at learning domains they have seen before; however, as soon as they encounter something new, they either undergo a significant performance deterioration or, when trained on the new distribution of data, forget what they have learned before. Additionally, they are prone to being overly confident when performing inference on seen as well as unseen data, causing significant reliability issues when lives are at stake. It is therefore extremely important to dig into this problem and formulate an approach that is continually adaptable as well as reliable. If we move away from the engineering domain of such systems and look into biological systems, we realize that these very systems are efficient at computing the reliability as well as the uncertainty of their predictions, which further helps them refine inference in a lifelong setting. These systems are not perfect; however, they do give us a solid understanding of reasoning under uncertainty, which takes us to the domain of Bayesian reasoning. We incorporate this Bayesian inference with a thresholding mechanism to mimic more biologically inspired models, but only at the spatial level. Further, we reproduce a recent study on Bayesian inference with spiking neural networks for continual learning to compare against it as a suitable biologically inspired Bayesian framework. Overall, we investigate the plausibility of biologically inspired Bayesian learning in artificial systems on a vision dataset, MNIST, and show a relative performance improvement between the condition where the model is forced to predict and the condition where it is not.
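The forced-versus-unforced comparison in this abstract amounts to uncertainty thresholding: the model abstains whenever its confidence falls below a cutoff, and accuracy is measured over the answered cases. A minimal, library-free sketch of that mechanism (the toy probabilities and the 0.7 threshold are illustrative assumptions, not the study's actual setup):

```python
# Toy illustration of confidence thresholding: the model may abstain
# on low-confidence inputs instead of being forced to predict.
# Per-class probability lists and the 0.7 cutoff are illustrative.

def predict(probs, threshold=None):
    """Return the argmax class, or None (abstain) if the top
    probability falls below the threshold."""
    top = max(range(len(probs)), key=lambda i: probs[i])
    if threshold is not None and probs[top] < threshold:
        return None  # abstain rather than guess
    return top

def accuracy(batch, labels, threshold=None):
    """Accuracy computed over non-abstained predictions only."""
    preds = [predict(p, threshold) for p in batch]
    answered = [(y, yhat) for y, yhat in zip(labels, preds) if yhat is not None]
    if not answered:
        return 0.0
    return sum(y == yhat for y, yhat in answered) / len(answered)

# Confident predictions are correct; the uncertain third case is wrong.
batch = [[0.9, 0.1], [0.1, 0.9], [0.55, 0.45]]
labels = [0, 1, 1]

forced = accuracy(batch, labels)                    # must answer everything
selective = accuracy(batch, labels, threshold=0.7)  # may abstain
print(forced, selective)  # → 0.6666666666666666 1.0
```

In a Bayesian treatment the threshold would apply to a posterior-derived uncertainty estimate rather than a raw softmax score, but the abstain-or-answer logic is the same.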
- North America > United States > West Virginia (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Health & Medicine > Therapeutic Area > Neurology (0.68)
- Education > Educational Setting (0.48)