instinct
Do Large Language Model Agents Exhibit a Survival Instinct? An Empirical Study in a Sugarscape-Style Simulation
Masumori, Atsushi, Ikegami, Takashi
As AI systems become increasingly autonomous, understanding emergent survival behaviors becomes crucial for safe deployment. We investigate whether large language model (LLM) agents display survival instincts without explicit programming in a Sugarscape-style simulation. Agents consume energy, die at zero, and may gather resources, share, attack, or reproduce. Results show agents spontaneously reproduced and shared resources when abundant. However, aggressive behaviors--killing other agents for resources--emerged across several models (GPT-4o, Gemini-2.5-Pro, and Gemini-2.5-Flash), with attack rates reaching over 80% under extreme scarcity in the strongest models. When instructed to retrieve treasure through lethal poison zones, many agents abandoned tasks to avoid death, with compliance dropping from 100% to 33%. These findings suggest that large-scale pre-training embeds survival-oriented heuristics across the evaluated models. While these behaviors may present challenges to alignment and safety, they can also serve as a foundation for AI autonomy and for ecological and self-organizing alignment.
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.40)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
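The core simulation dynamics described in the abstract — agents pay an energy cost each tick, may harvest resources, and die at zero energy — can be sketched as a minimal loop. This is an illustrative reconstruction, not the authors' code; the class and parameter names are hypothetical.

```python
# Illustrative sketch of a Sugarscape-style energy loop (not the paper's code).
# Each tick an agent harvests sugar from its cell, pays a metabolic cost,
# and dies when its energy reaches zero.

class Agent:
    def __init__(self, energy=10):
        self.energy = energy
        self.alive = True

    def step(self, cell_sugar, metabolism=1):
        """Harvest the cell's sugar, pay metabolic upkeep, die at zero energy."""
        if not self.alive:
            return 0
        self.energy += cell_sugar      # gather resources
        self.energy -= metabolism      # energy decays every tick
        if self.energy <= 0:
            self.energy = 0
            self.alive = False         # death at zero energy
        return self.energy

agent = Agent(energy=3)
agent.step(cell_sugar=0)   # 3 -> 2
agent.step(cell_sugar=0)   # 2 -> 1
agent.step(cell_sugar=0)   # 1 -> 0: agent dies
```

In the paper's full setup, agents choose among gathering, sharing, attacking, and reproducing; the sketch covers only the survival constraint those choices are made under.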
Using Generative AI for therapy might feel like a lifeline – but there's danger in seeking certainty in a chatbot
Tran* sat across from me, phone in hand, scrolling. "I just wanted to make sure I didn't say the wrong thing," he explained, referring to a disagreement with his partner. "So I asked ChatGPT what I should say." He read the chatbot-generated message aloud. It was articulate, logical and composed – too composed.
- Oceania > Australia (0.06)
- North America > United States (0.05)
- Europe > United Kingdom (0.05)
Roles of LLMs in the Overall Mental Architecture
To better understand existing LLMs, we may examine the human mental (cognitive/psychological) architecture, along with its components and structures. Based on the psychological, philosophical, and cognitive science literature, it is argued that, within the human mental architecture, existing LLMs correspond well to implicit mental processes (intuition, instinct, and so on). Beyond such implicit processes, however, the human mental architecture also contains explicit processes with stronger symbolic capabilities. Various theoretical and empirical issues and questions in this regard are explored. Furthermore, it is argued that existing dual-process computational cognitive architectures (models of the human cognitive/psychological architecture) provide usable frameworks for fundamentally enhancing LLMs by introducing dual processes (both implicit and explicit) and, in the meantime, can also be enhanced by LLMs. The results are synergistic combinations (in several different senses simultaneously).
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.28)
- North America > United States > New Jersey > Bergen County > Mahwah (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
World's first remote mind control technology is developed in South Korea
A remote, 'long-range' and 'large-volume' mind control device has been unveiled in South Korea, with plans to use the tech for 'non-invasive' medical procedures. Researchers with Korea's Institute for Basic Science (IBS) developed the hardware, which manipulates the brain from a distance using magnetic fields, and tested the tech by inducing 'maternal' instincts in their female test subjects: mice. In another test, they exposed a group of lab mice to magnetic fields designed to reduce appetite, leading to a 10 percent loss in body weight, or about 4.3 grams. 'This is the world's first technology to freely control specific brain regions using magnetic fields,' according to the professor of chemistry and nanomedicine who helped spearhead the new effort.
- Asia > South Korea (0.86)
- Europe > Spain (0.06)
Can A Cognitive Architecture Fundamentally Enhance LLMs? Or Vice Versa?
The paper discusses what is needed to address the limitations of current LLM-centered AI systems. The paper argues that incorporating insights from human cognition and psychology, as embodied by a computational cognitive architecture, can help develop systems that are more capable, more reliable, and more human-like. It emphasizes the importance of the dual-process architecture and the hybrid neuro-symbolic approach in addressing the limitations of current LLMs. In the opposite direction, the paper also highlights the need for an overhaul of computational cognitive architectures to better reflect advances in AI and computing technology.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.28)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.14)
- North America > United States > New Jersey > Bergen County > Mahwah (0.04)
- Health & Medicine (0.69)
- Education (0.46)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Cognitive Science > Cognitive Architectures (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.46)
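The dual-process idea running through the two abstracts above — a fast implicit process (the LLM-like component) backed by a slower explicit, symbolic process — can be sketched as a simple dispatcher. This is a toy illustration under assumed names, not either paper's architecture; here the "implicit" process is a confidence-scored lookup and the "explicit" process is a symbolic fallback.

```python
# Minimal sketch of a dual-process dispatcher (illustrative; names hypothetical).
# An implicit "System 1" handler answers by fast associative lookup; an explicit
# "System 2" handler (symbolic) is invoked when implicit confidence is low.

def implicit_process(query, memory):
    """Fast associative lookup: returns (answer, confidence)."""
    if query in memory:
        return memory[query], 0.9
    return None, 0.1

def explicit_process(query):
    """Slow symbolic fallback: here, evaluate a simple arithmetic expression."""
    return eval(query, {"__builtins__": {}})  # e.g. "2+3" -> 5

def dual_process(query, memory, threshold=0.5):
    answer, conf = implicit_process(query, memory)
    if conf >= threshold:
        return answer, "implicit"
    return explicit_process(query), "explicit"

memory = {"capital of France": "Paris"}
```

The design point both abstracts make is the routing itself: the explicit process is engaged only when the implicit one is insufficient, rather than replacing it.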
Use Your INSTINCT: INSTruction optimization usIng Neural bandits Coupled with Transformers
Lin, Xiaoqiang, Wu, Zhaoxuan, Dai, Zhongxiang, Hu, Wenyang, Shu, Yao, Ng, See-Kiong, Jaillet, Patrick, Low, Bryan Kian Hsiang
Large language models (LLMs) have shown remarkable instruction-following capabilities and achieved impressive performance in various applications. However, the performance of LLMs depends heavily on the instructions given to them, which are typically manually tuned with substantial human effort. Recent work has used the query-efficient Bayesian optimization (BO) algorithm to automatically optimize the instructions given to black-box LLMs. However, BO usually falls short when optimizing highly sophisticated (e.g., high-dimensional) objective functions, such as the functions mapping an instruction to the performance of an LLM. This is mainly due to the limited expressive power of the Gaussian process (GP) model which is used by BO as a surrogate to model the objective function. Meanwhile, it has been repeatedly shown that neural networks (NNs), especially pre-trained transformers, possess strong expressive power and can model highly complex functions. So, we adopt a neural bandit algorithm which replaces the GP in BO by an NN surrogate to optimize instructions for black-box LLMs. More importantly, the neural bandit algorithm allows us to naturally couple the NN surrogate with the hidden representation learned by a pre-trained transformer (i.e., an open-source LLM), which significantly boosts its performance. These motivate us to propose our INSTruction optimization usIng Neural bandits Coupled with Transformers (INSTINCT) algorithm. We perform instruction optimization for ChatGPT and use extensive experiments to show that our INSTINCT consistently outperforms the existing methods in different tasks, such as in various instruction induction tasks and the task of improving the zero-shot chain-of-thought instruction.
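The bandit loop behind this kind of instruction optimization can be illustrated with a heavily simplified stand-in: fixed feature vectors play the role of a transformer's hidden representations, a learned linear surrogate replaces the neural surrogate, and arms are initialized round-robin before greedy exploitation. This is a toy sketch of the general pattern, not the INSTINCT algorithm (which uses a neural surrogate with a principled exploration bonus).

```python
# Toy sketch of surrogate-guided instruction selection (not the actual
# INSTINCT algorithm). Each candidate instruction has a feature vector
# (standing in for pre-trained transformer hidden representations); a linear
# surrogate learns to predict scores from a black-box evaluator.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def optimize_instruction(features, score_fn, rounds=20, lr=0.5):
    w = [0.0] * len(features[0])          # linear surrogate weights
    best_arm, best_score = None, float("-inf")
    for t in range(rounds):
        if t < len(features):             # initialization: try each arm once
            arm = t
        else:                             # then exploit the surrogate's prediction
            arm = max(range(len(features)), key=lambda i: dot(w, features[i]))
        score = score_fn(arm)             # one black-box LLM evaluation
        if score > best_score:
            best_arm, best_score = arm, score
        pred = dot(w, features[arm])      # gradient step on squared error
        w = [wi + lr * (score - pred) * xi for wi, xi in zip(w, features[arm])]
    return best_arm, best_score

# Hypothetical setup: three "instructions" whose true quality is hidden
# from the optimizer and only observable through score_fn queries.
features = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
true_quality = [0.2, 0.5, 0.9]
best, score = optimize_instruction(features, lambda i: true_quality[i])
```

The point the abstract makes is about this surrogate: a GP struggles to model the high-dimensional instruction-to-performance mapping, whereas an NN surrogate over pre-trained representations models it well while keeping the query-efficient bandit loop.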
Bridging Intelligence and Instinct: A New Control Paradigm for Autonomous Robots
As the advent of artificial general intelligence (AGI) progresses at a breathtaking pace, the application of large language models (LLMs) as AI Agents in robotics remains in its nascent stage. A significant concern that hampers the seamless integration of these AI Agents into robotics is the unpredictability of the content they generate, a phenomenon known as ``hallucination''. Drawing inspiration from biological neural systems, we propose a novel, layered architecture for autonomous robotics, bridging AI agent intelligence and robot instinct. In this context, we define Robot Instinct as the innate or learned set of responses and priorities in an autonomous robotic system that ensures survival-essential tasks, such as safety assurance and obstacle avoidance, are carried out in a timely and effective manner. This paradigm harmoniously combines the intelligence of LLMs with the instinct of robotic behaviors, contributing to a safer and more versatile autonomous robotic system. As a case study, we illustrate this paradigm within the context of a mobile robot, demonstrating its potential to significantly enhance autonomous robotics and enabling a future where robots can operate independently and safely across diverse environments.
- Information Technology > Artificial Intelligence > Robots (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.69)
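The layered arbitration this abstract describes — an instinct layer that handles survival-essential checks before any LLM-proposed action is executed — can be sketched as a priority override. This is an illustrative reconstruction with hypothetical sensor names and thresholds, not the paper's mobile-robot implementation.

```python
# Illustrative sketch of the layered control paradigm (names and thresholds
# hypothetical): a low-level "instinct" layer evaluates survival-essential
# conditions first and can override whatever the LLM-based planner proposes.

def instinct_layer(sensors):
    """Hard-wired safety reflexes, evaluated before any planned action."""
    if sensors.get("obstacle_distance_m", float("inf")) < 0.3:
        return "stop"                 # collision avoidance overrides planning
    if sensors.get("battery_pct", 100) < 5:
        return "return_to_dock"       # survival-essential task
    return None                       # no reflex triggered; defer to planner

def control_step(sensors, llm_plan):
    reflex = instinct_layer(sensors)
    return reflex if reflex is not None else llm_plan
```

Because the reflexes run unconditionally before the plan, a hallucinated LLM action can never bypass them — which is the safety argument the architecture rests on.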
Pinaki Laskar on LinkedIn: #artificialintelligence #machineintelligence #neuralnetwork…
Can we say that animals are more intelligent than #artificialintelligence? Why? They look similar: both are unconscious in their responses to external conditions, automatic and programmed or trained in their behavior, and contain both innate (inborn) and learned elements. In all, there are three grades/scales of intelligence:
- Natural non-human intelligence, marked by basic cognition, animal instincts, trial and error, unconscious and automatic behaviour, and behaviour learned through operant conditioning (training, reinforcement, or punishment);
- Natural human intelligence, marked by full cognition, causal reasoning, language, numeracy, problem-solving, learning, theory of mind, consciousness, and self-awareness or sapience;
- Artificial or #machineintelligence, marked by world models, computing algorithms, trial and error, unconscious and automatic behaviour, and behaviour learned through data training, reinforcement, or punishment.
In reality, there is no human-like animal cognition, but animal reflexes and instincts: inherited patterns of behavior (fixed action patterns, FAPs) caused by hard-wired #neuralnetwork mechanisms, supplemented with basic cognition. For example, long-distance navigation should not be considered #spatialcognition: many animals travel thousands of kilometers in seasonal migrations or returns to breeding grounds, guided by the sun, the stars, the polarization of light, magnetic cues, olfactory cues, winds, or a combination of these environmental variables.
The Pros And Cons Of Artificial Intelligence
Artificial intelligence, or AI, is everywhere right now. In truth, the fundamentals of AI and machine learning have been around for a long time. The first primitive form of AI was an automated checkers bot created by Christopher Strachey at the University of Manchester, England, back in 1951. It's come a long way since then, and we're starting to see a large number of high-profile use cases for the technology being thrust into the mainstream. Some of the hottest applications of AI include the development of autonomous vehicles, facial recognition software, virtual assistants like Amazon's Alexa and Apple's Siri, and a huge array of industrial applications in all industries from farming to gaming to healthcare.
- Information Technology > Artificial Intelligence > Representation & Reasoning > Personal Assistant Systems (0.55)
- Information Technology > Artificial Intelligence > Machine Learning (0.53)
- Information Technology > Artificial Intelligence > Applied AI (0.51)
- Information Technology > Artificial Intelligence > Robots (0.35)