 adaptive behavior


Gearshift Fellowship: A Next-Generation Neurocomputational Game Platform to Model and Train Human-AI Adaptability

Ging-Jehli, Nadja R., Childers, Russell K., Lu, Joshua, Gemma, Robert, Zhu, Rachel

arXiv.org Artificial Intelligence

How do we learn when to persist, when to let go, and when to shift gears? Gearshift Fellowship (GF) is the prototype of a new Supertask paradigm designed to model how humans and artificial agents adapt to shifting environmental demands. Grounded in cognitive neuroscience, computational psychiatry, economics, and artificial intelligence, Supertasks combine computational neurocognitive modeling with serious gaming. This creates a dynamic, multi-mission environment engineered to assess mechanisms of adaptive behavior across cognitive and social contexts. Computational parameters explain behavior and probe mechanisms by controlling the game environment. Unlike traditional tasks, GF enables neurocognitive modeling of individual differences across perceptual decisions, learning, and meta-cognitive levels. This positions GF as a flexible testbed for understanding how cognitive-affective control processes, learning styles, strategy use, and motivational shifts adapt across contexts and over time. It serves as an experimental platform for scientists, a phenotype-to-mechanism intervention for clinicians, and a training tool for players aiming to strengthen self-regulated learning, mood, and stress resilience. Results from an ongoing online study (n = 60) show that GF recovers effects from traditional neuropsychological tasks (construct validity) and uncovers novel patterns in how learning differs across contexts and in how clinical features map onto distinct adaptations. These findings pave the way for developing in-game interventions that foster self-efficacy and agency to cope with real-world stress and uncertainty. GF builds a new adaptive ecosystem designed to accelerate science, transform clinical care, and foster individual growth. It offers a mirror and training ground where humans and machines co-develop deeper flexibility and awareness.


SymRAG: Efficient Neuro-Symbolic Retrieval Through Adaptive Query Routing

Hakim, Safayat Bin, Adil, Muhammad, Velasquez, Alvaro, Song, Houbing Herbert

arXiv.org Artificial Intelligence

Current Retrieval-Augmented Generation systems use uniform processing, causing inefficiency as simple queries consume resources similar to complex multi-hop tasks. We present SymRAG, a framework that introduces adaptive query routing via real-time complexity and load assessment to select symbolic, neural, or hybrid pathways. SymRAG's neuro-symbolic approach adjusts computational pathways based on both query characteristics and system load, enabling efficient resource allocation across diverse query types. By combining linguistic and structural query properties with system load metrics, SymRAG allocates resources proportional to reasoning requirements. Evaluated on 2,000 queries across HotpotQA (multi-hop reasoning) and DROP (discrete reasoning) using Llama-3.2-3B and Mistral-7B models, SymRAG achieves competitive accuracy (97.6--100.0% exact match) with efficient resource utilization (3.6--6.2% CPU utilization, 0.985--3.165s processing). Disabling adaptive routing increases processing time by 169--1151%, showing its significance for complex models. These results suggest adaptive computation strategies are more sustainable and scalable for hybrid AI systems that use dynamic routing and neuro-symbolic frameworks.
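The core idea of routing each query to a symbolic, neural, or hybrid pathway based on query complexity and system load can be illustrated with a minimal sketch. The scoring heuristics, thresholds, and cue words below are invented for illustration and are not SymRAG's actual implementation:

```python
# Hypothetical sketch of complexity- and load-aware query routing in the
# spirit of SymRAG. Scores, thresholds, and cue words are illustrative only.

def complexity_score(query: str) -> float:
    """Crude linguistic/structural complexity estimate in [0, 1]."""
    tokens = query.split()
    # Multi-hop queries tend to contain connective/comparative cue words.
    multi_hop_cues = sum(tok.lower() in {"and", "then", "compare", "between"}
                         for tok in tokens)
    return min(1.0, 0.02 * len(tokens) + 0.15 * multi_hop_cues)

def route(query: str, cpu_load: float) -> str:
    """Pick a symbolic, neural, or hybrid pathway from complexity and load."""
    c = complexity_score(query)
    if c < 0.3:
        return "symbolic"            # cheap rule/pattern lookup for simple queries
    if c > 0.7 and cpu_load < 0.8:   # complex query and spare capacity available
        return "hybrid"
    return "neural"

print(route("capital of France", 0.2))  # -> symbolic
```

The point of the sketch is the allocation principle: resources scale with estimated reasoning requirements, and the load term keeps the most expensive (hybrid) pathway reserved for when the system has headroom.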


Adaptive Intelligence: leveraging insights from adaptive behavior in animals to build flexible AI systems

Mathis, Mackenzie Weygandt

arXiv.org Artificial Intelligence

Biological intelligence is inherently adaptive -- animals continually adjust their actions based on environmental feedback. However, creating adaptive artificial intelligence (AI) remains a major challenge. The next frontier is to go beyond traditional AI to develop "adaptive intelligence," defined here as harnessing insights from biological intelligence to build agents that can learn online, generalize, and rapidly adapt to changes in their environment. Recent advances in neuroscience offer inspiration through studies that increasingly focus on how animals naturally learn and adapt their world models. In this Perspective, I will review the behavioral and neural foundations of adaptive biological intelligence, the parallel progress in AI, and explore brain-inspired approaches for building more adaptive algorithms.


Adaptive Motion Generation Using Uncertainty-Driven Foresight Prediction

Hiruma, Hyogo, Ito, Hiroshi, Ogata, Tetsuya

arXiv.org Artificial Intelligence

Uncertainty of environments has long been a difficult characteristic to handle when performing real-world robot tasks, because the uncertainty produces unexpected observations that cannot be covered by manual scripting. Learning-based robot control methods are a promising approach for generating flexible motions in unknown situations, but they still tend to suffer under uncertainty due to their deterministic nature. In order to adaptively perform the target task under such conditions, the robot control model must be able to accurately understand the possible uncertainty and to exploratively derive the optimal action that minimizes it. This paper extends an existing predictive-learning-based robot control method, which employs foresight prediction using dynamic internal simulation. The foresight module refines the model's hidden states by sampling multiple possible futures and replacing them with the one that leads to the lowest future uncertainty. The adaptiveness of the model was evaluated on a door opening task. The door can be opened by pushing, pulling, or sliding, but the robot cannot visually distinguish which, and is required to adapt on the fly. The results showed that the proposed model adaptively diverged its motion through interaction with the door, whereas conventional methods failed to diverge stably. The models were analyzed using Lyapunov exponents of RNN hidden states, which reflect the possible divergence at each time step during task execution. The result indicated that the foresight module biased the model to consider future consequences, which led to embedding uncertainties in the policy of the robot controller rather than in the resultant observation. This is beneficial for implementing adaptive behaviors, as it induces the derivation of diverse motions during exploration.
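The foresight mechanism described above, sampling several imagined futures and keeping the hidden-state refinement whose rollout shows the lowest uncertainty, can be sketched with a toy forward model. The linear dynamics, variance-as-uncertainty proxy, and all constants below are assumptions for illustration, not the paper's actual network:

```python
# Toy sketch of uncertainty-driven foresight refinement: sample candidate
# hidden-state perturbations, roll each forward, keep the least uncertain.
# The forward model and uncertainty proxy are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

def step(hidden: np.ndarray, noise_scale: float) -> np.ndarray:
    """Toy forward model: decayed hidden state plus stochastic disturbance."""
    return 0.9 * hidden + rng.normal(0.0, noise_scale, hidden.shape)

def foresight_refine(hidden: np.ndarray, n_samples: int = 8,
                     horizon: int = 5, noise_scale: float = 0.1) -> np.ndarray:
    """Return the candidate refinement whose imagined future has the
    lowest uncertainty (variance over the rollout horizon)."""
    best_h, best_u = hidden, np.inf
    for _ in range(n_samples):
        candidate = hidden + rng.normal(0.0, noise_scale, hidden.shape)
        cur, traj = candidate, []
        for _ in range(horizon):
            cur = step(cur, noise_scale)
            traj.append(cur)
        u = np.var(np.stack(traj))  # uncertainty proxy for this future
        if u < best_u:
            best_h, best_u = candidate, u
    return best_h
```

In the paper's setting the rollouts come from the learned RNN's internal simulation rather than a fixed linear map, but the selection logic, keeping the hidden state whose future looks least uncertain, is the same shape.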


Anticipation through Head Pose Estimation: a preliminary study

Tomenotti, Federico Figari, Noceti, Nicoletta

arXiv.org Artificial Intelligence

Abstract -- The ability to anticipate others' goals and intentions is a key element of natural human-human interaction [13]. The same ability is paramount in different application domains - ranging from gaming to domotics and home assistance, to robotics. In the latter, in particular, anticipation abilities may enable robots to seamlessly interact with humans in shared environments, enhancing safety, efficiency and fluidity in Human-Robot Interaction scenarios [8]. Over the last years, the importance of leveraging non-verbal cues for understanding humans' intentions has been well assessed [2, 3]. More in detail, we hypothesize we can use the 3D head direction as a proxy of the gaze, and that by deriving simple visual geometrical cues in an unsupervised way - connecting the head and hands of a subject with the elements in the environment - we can anticipate the goal of an action in terms of the next active object or target position (when the movement involves a change in location of objects). The goal is achieved using object and human pose detectors, deriving the 3D head pose and reasoning on the interaction between the human and the environment. To test this hypothesis, we conducted preliminary experiments using a private dataset including videos of different subjects sitting in front of a table.


No-brainer: Morphological Computation driven Adaptive Behavior in Soft Robots

Mertan, Alican, Cheney, Nick

arXiv.org Artificial Intelligence

It is prevalent in contemporary AI and robotics to separately postulate a brain modeled by neural networks and employ it to learn intelligent and adaptive behavior. While this method has worked very well for many types of tasks, it isn't the only type of intelligence that exists in nature. In this work, we study the ways in which intelligent behavior can be created without a separate and explicit brain for robot control, but rather solely as a result of the computation occurring within the physical body of a robot. Specifically, we show that adaptive and complex behavior can be created in voxel-based virtual soft robots by using simple reactive materials that actively change the shape of the robot, and thus its behavior, under different environmental cues. We demonstrate a proof of concept for the idea of closed-loop morphological computation, and show that in our implementation, it enables behavior mimicking logic gates, enabling us to demonstrate how such behaviors may be combined to build up more complex collective behaviors. Keywords: Soft robotics, Adaptive behavior.

1 Introduction and Background. Recent advances in artificial intelligence and machine learning have benefited greatly from the rise of modern deep learning systems, ultimately aimed at artificial general intelligence [22]. The coming-of-age of these artificial neural network systems includes a long history of bio-inspiration, dating back to McCulloch and Pitts [26]. Yet the processes behind biological intelligence reach far beyond systems and processes confined to the brain of living organisms. Our bias toward attributing intelligent behavior to the mind is far from new.
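The idea of a brainless material response mimicking a logic gate can be captured in a few lines. The response rule and threshold below are invented for illustration; the paper's actual voxel physics is far richer:

```python
# Illustrative toy of morphological computation: a reactive "voxel" whose
# shape change under two environmental cues mimics a logic gate, with no
# neural controller. The response rule and threshold are assumptions.

def reactive_voxel(cue_a: float, cue_b: float, threshold: float = 0.5) -> bool:
    """Voxel expands (True) only when both cues exceed threshold: an AND gate."""
    expansion = min(cue_a, cue_b)  # the material responds to the weaker cue
    return expansion > threshold

def nand_from_voxels(cue_a: float, cue_b: float) -> bool:
    """Composing reactive responses builds richer collective behavior;
    NAND is functionally complete, so any logic follows from composition."""
    return not reactive_voxel(cue_a, cue_b)
```

The design point mirrors the abstract's claim: once a body's material response implements a gate, combining bodies combines gates, so complex collective behavior can emerge from computation in the body alone.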


Adaptive Manipulation using Behavior Trees

Cloete, Jacques, Merkt, Wolfgang, Havoutis, Ioannis

arXiv.org Artificial Intelligence

Many manipulation tasks use instances of a set of common motions, such as a twisting motion for tightening or loosening a valve. However, different instances of the same motion often require different environmental parameters (e.g. force/torque level), and thus different manipulation strategies to successfully complete; for example, grasping a valve handle from the side rather than head-on to increase applied torque. Humans can intuitively adapt their manipulation strategy to best suit such problems, but representing and implementing such behaviors for robots remains an open question. We present a behavior tree-based approach for adaptive manipulation, wherein the robot can reactively select from and switch between a discrete set of manipulation strategies during task execution. Furthermore, our approach allows the robot to learn from past attempts to optimize performance, for example learning the optimal strategy for different task instances. Our approach also allows the robot to preempt task failure and either change to a more feasible strategy or safely exit the task before catastrophic failure occurs. We propose a simple behavior tree design for general adaptive robot behavior and apply it in the context of industrial manipulation. The adaptive behavior outperformed all baseline behaviors that only used a single manipulation strategy, markedly reducing the number of attempts and overall time taken to complete the example tasks. Our results demonstrate potential for improved robustness and efficiency in task completion, reducing dependency on human supervision and intervention.
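The switching-and-learning behavior described above can be sketched as a fallback over strategy callables with a running success score. The strategy names and scoring rule are hypothetical, not the paper's behavior-tree design:

```python
# Minimal sketch of behavior-tree-style adaptive manipulation: try strategies
# in order of past success, switch on failure, and exit safely if all fail.
# Strategy names and the +/-1 scoring rule are invented for illustration.

def try_strategies(strategies, stats):
    """Fallback node: attempt strategies ordered by learned success score."""
    ordered = sorted(strategies, key=lambda s: -stats.get(s.__name__, 0.0))
    for strategy in ordered:
        ok = strategy()
        # Learn from the attempt so future task instances pick better first.
        stats[strategy.__name__] = stats.get(strategy.__name__, 0.0) + (1.0 if ok else -1.0)
        if ok:
            return True
    return False  # no feasible strategy: exit before catastrophic failure

# Usage with hypothetical valve-grasping strategies.
stats = {}
def grasp_head_on():   return False  # insufficient torque, fails
def grasp_from_side(): return True   # side grasp increases torque, succeeds
try_strategies([grasp_head_on, grasp_from_side], stats)
```

After one run the failed head-on grasp is scored down, so the next task instance tries the side grasp first, which is the "learn from past attempts" loop the abstract describes.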


Learning Agile Locomotion and Adaptive Behaviors via RL-augmented MPC

Chen, Yiyu, Nguyen, Quan

arXiv.org Artificial Intelligence

In the context of legged robots, adaptive behavior involves adaptive balancing and adaptive swing foot reflection. While adaptive balancing counteracts perturbations to the robot, adaptive swing foot reflection helps the robot to navigate intricate terrains without foot entrapment. In this paper, we manage to bring both aspects of adaptive behavior to quadruped locomotion by combining RL and MPC while improving the robustness and agility of blind legged locomotion. This integration leverages MPC's strength in predictive capabilities and RL's adeptness in drawing from past experiences. Unlike traditional locomotion controls that separate stance foot control and swing foot trajectory, our innovative approach unifies them, addressing their lack of synchronization. At the heart of our contribution is the synthesis of stance foot control with swing foot reflection, improving agility and robustness in locomotion with adaptive behavior. A hallmark of our approach is robust blind stair climbing through swing foot reflection. Moreover, we intentionally designed the learning module as a general plugin for different robot platforms. We trained the policy and implemented our approach on the Unitree A1 robot, achieving impressive results: a peak turn rate of 8.5 rad/s, a peak running speed of 3 m/s, and steering at a speed of 2.5 m/s. Remarkably, this framework also allows the robot to maintain stable locomotion while bearing an unexpected load of 10 kg, or 83\% of its body mass. We further demonstrate the generalizability and robustness of the same policy where it realizes zero-shot transfer to different robot platforms like Go1 and AlienGo robots for load carrying. Code is made available for the use of the research community at https://github.com/DRCL-USC/RL_augmented_MPC.git


Heterogeneous Neural Networks for Adaptive Behavior in Dynamic Environments

Neural Information Processing Systems

Research in artificial neural networks has generally emphasized homogeneous architectures; biological nervous systems, by contrast, are highly heterogeneous. This heterogeneity is crucial to the flexible generation of behavior which is essential for survival in a complex, dynamic environment. It may also provide powerful insights into the design of artificial neural networks. In this paper, we describe a heterogeneous neural network for controlling the walking of a simulated insect. This controller is inspired by the neuroethological and neurobiological literature on insect locomotion.


Top 5 Reinforcement Learning Books

#artificialintelligence

Reinforcement learning: over the last decade we have seen a lot of progress in the use of reinforcement learning algorithms in settings where labeled data doesn't exist and a supervised learning approach is not possible. The state-of-the-art approach to tackling RL problems is Policy Gradients, which, in combination with Monte Carlo Tree Search, were employed by Google DeepMind's AlphaGo system to famously beat the Go world champion Lee Sedol. The readers will love our list because it is data-driven and objective. Artificial Intelligence: A Modern Approach, 3e offers the most comprehensive, up-to-date introduction to the theory and practice of artificial intelligence. Number one in its field, this textbook is ideal for one- or two-semester, undergraduate or graduate-level courses in Artificial Intelligence. Dr. Peter Norvig, contributing Artificial Intelligence author, and Professor Sebastian Thrun, a Pearson author, are offering a free online course at Stanford University on artificial intelligence.