
Modeling AI-Human Collaboration as a Multi-Agent Adaptation

Sen, Prothit, Jakkaraju, Sai Mihir

arXiv.org Artificial Intelligence

We develop an agent-based simulation to formalize AI-human collaboration as a function of task structure, advancing a generalizable framework for strategic decision-making in organizations. Distinguishing between heuristic-based human adaptation and rule-based AI search, we model interactions across modular (parallel) and sequenced (interdependent) tasks using an NK model. Our results reveal that in modular tasks, AI often substitutes for humans - delivering higher payoffs unless human expertise is very high, and the AI search space is either narrowly focused or extremely broad. In sequenced tasks, interesting complementarities emerge. When an expert human initiates the search and AI subsequently refines it, aggregate performance is maximized. Conversely, when AI leads, excessive heuristic refinement by the human can reduce payoffs. We also show that even "hallucinatory" AI - lacking memory or structure - can improve outcomes when augmenting low-capability humans by helping escape local optima. These results yield a robust implication: the effectiveness of AI-human collaboration depends less on context or industry, and more on the underlying task structure. By elevating task decomposition as the central unit of analysis, our model provides a transferable lens for strategic decision-making involving humans and an agentic AI across diverse organizational settings.
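The NK-model search dynamics the abstract builds on can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: the function names, the cyclic neighbourhood choice, and the one-bit-flip hill climber (a stand-in for heuristic human adaptation) are all our own assumptions.

```python
import random

def make_nk_landscape(n, k, seed=0):
    """Random NK landscape: each locus's contribution depends on
    itself and its k right-hand neighbours (cyclic)."""
    rng = random.Random(seed)
    # one random contribution table per locus, keyed by its (k+1)-bit sub-config
    tables = [{tuple(cfg): rng.random() for cfg in _all_bits(k + 1)}
              for _ in range(n)]

    def fitness(genome):
        total = 0.0
        for i in range(n):
            sub = tuple(genome[(i + j) % n] for j in range(k + 1))
            total += tables[i][sub]
        return total / n  # mean contribution, always in [0, 1)
    return fitness

def _all_bits(length):
    """Enumerate all bit vectors of the given length."""
    for x in range(2 ** length):
        yield [(x >> b) & 1 for b in range(length)]

def hill_climb(fitness, genome, steps=100):
    """One-bit-flip local search: accept a flip only if it does not
    lower fitness, so the best value seen never decreases."""
    best = fitness(genome)
    for _ in range(steps):
        i = random.randrange(len(genome))
        genome[i] ^= 1
        f = fitness(genome)
        if f >= best:
            best = f
        else:
            genome[i] ^= 1  # revert a worsening flip
    return genome, best
```

On such a landscape, a broader AI search could be modelled simply by flipping several bits per step; comparing payoffs across modular vs. sequenced task orderings then mirrors the simulation design described above.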


The configurable tree graph (CT-graph): measurable problems in partially observable and distal reward environments for lifelong reinforcement learning

Soltoggio, Andrea, Ben-Iwhiwhu, Eseoghene, Peridis, Christos, Ladosz, Pawel, Dick, Jeffery, Pilly, Praveen K., Kolouri, Soheil

arXiv.org Artificial Intelligence

Many real-world problems are characterized by a large number of observations, confounding and spurious correlations, partially observable states, and distal, dynamic rewards with hierarchical reward structures. Such conditions make it hard for both animals and machines to learn complex skills. The learning process requires discovering what is important and what can be ignored, how the reward function is structured, and how to reuse knowledge across different tasks that share common properties. For these reasons, the application of standard reinforcement learning (RL) algorithms (Sutton and Barto, 2018) to solve structured problems is often not effective. Limitations of current RL algorithms include the problem of exploration with sparse rewards (Pathak et al., 2017), dealing with partially observable Markov decision problems (POMDPs) (Ladosz et al., 2021), coping with large amounts of confounding stimuli (Thrun, 2000; Kim et al., 2019), and reusing skills for efficiently learning multiple tasks in a lifelong learning setting (Mendez and Eaton, 2020). Standard reinforcement learning algorithms are best suited to problems that can be formulated as a single-task, observable Markov decision process (MDP). Under these assumptions, with complete observability and with static and frequent rewards, deep reinforcement learning (DRL) (Mnih et al., 2015; Li, 2017) has gained popularity due to its ability to learn an approximated Q-value function directly from raw pixel data on the Atari 2600 platform. This and similar algorithms stack multiple frames to derive states of an MDP, and use a basic ɛ-greedy exploration policy. In more complex cases with partial observability and sparse rewards, extensions have been proposed to include more advanced exploration techniques (Ladosz et al., 2022), e.g.
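The two baseline mechanisms the abstract names, frame stacking to approximate a Markov state and ɛ-greedy exploration, can be sketched as follows; this is a generic illustration, and the helper names are hypothetical rather than taken from any of the cited works.

```python
import random
from collections import deque

def make_frame_stacker(k):
    """Maintain the last k observations as a proxy Markov state,
    padding with the first observation until the stack is full."""
    frames = deque(maxlen=k)
    def push(obs):
        frames.append(obs)
        while len(frames) < k:
            frames.appendleft(obs)
        return tuple(frames)
    return push

def epsilon_greedy(q_values, epsilon, rng=random):
    """With probability epsilon take a random action, else the greedy one."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=q_values.__getitem__)
```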


Online Shielding for Reinforcement Learning

Könighofer, Bettina, Rudolf, Julian, Palmisano, Alexander, Tappler, Martin, Bloem, Roderick

arXiv.org Artificial Intelligence

Besides the recent impressive results on reinforcement learning (RL), safety is still one of the major research challenges in RL. RL is a machine-learning approach to determine near-optimal policies in Markov decision processes (MDPs). In this paper, we consider the setting where the safety-relevant fragment of the MDP together with a temporal logic safety specification is given and many safety violations can be avoided by planning ahead a short time into the future. We propose an approach for online safety shielding of RL agents. During runtime, the shield analyses the safety of each available action. For any action, the shield computes the maximal probability to not violate the safety specification within the next $k$ steps when executing this action. Based on this probability and a given threshold, the shield decides whether to block an action from the agent. Existing offline shielding approaches compute exhaustively the safety of all state-action combinations ahead of time, resulting in huge computation times and large memory consumption. The intuition behind online shielding is to compute at runtime the set of all states that could be reached in the near future. For each of these states, the safety of all available actions is analysed and used for shielding as soon as one of the considered states is reached. Our approach is well suited for high-level planning problems where the time between decisions can be used for safety computations and it is sustainable for the agent to wait until these computations are finished. For our evaluation, we selected a 2-player version of the classical computer game SNAKE. The game represents a high-level planning problem that requires fast decisions and the multiplayer setting induces a large state space, which is computationally expensive to analyse exhaustively.
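The core shielding computation described above, the maximal probability of avoiding a safety violation within the next $k$ steps, amounts to a finite-horizon dynamic program over the safety-relevant MDP fragment. A minimal sketch follows; the toy MDP encoding and function names are our own assumptions, not the paper's tooling.

```python
def safe_prob(mdp, unsafe, state, action, k):
    """Max probability of avoiding `unsafe` within k steps after
    taking `action` in `state`.  mdp[s][a] is a list of
    (next_state, probability) pairs."""
    def v(s, steps):
        if s in unsafe:
            return 0.0
        if steps == 0:
            return 1.0
        return max(q(s, a, steps) for a in mdp[s])  # best follow-up action

    def q(s, a, steps):
        return sum(p * v(s2, steps - 1) for s2, p in mdp[s][a])

    if state in unsafe:
        return 0.0
    return q(state, action, k)

def shield(mdp, unsafe, state, k, threshold):
    """Allow only actions whose k-step safety meets the threshold."""
    return {a for a in mdp[state]
            if safe_prob(mdp, unsafe, state, a, k) >= threshold}
```

The recursion explores all length-k futures; the online setting in the paper bounds this work by only expanding states actually reachable before the next decision.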


Evolutionary Processes in Quantum Decision Theory

Yukalov, V. I.

arXiv.org Artificial Intelligence

In recent years, there has been considerable interest in the possibility of formulating decision theory in the language of quantum mechanics. Numerous references on this topic can be found in the books [1-4] and review articles [5-8]. This interest is caused by the inability of classical decision theory [9] to account for the behaviour of real decision makers, which requires the development of other approaches. Resorting to the techniques of quantum theory gives hope of a better representation of behavioral decision making. There are several variants of using quantum mechanics for interpreting conscious effects.


Unsupervised Discovery of Decision States for Transfer in Reinforcement Learning

Modhe, Nirbhay, Chattopadhyay, Prithvijit, Sharma, Mohit, Das, Abhishek, Parikh, Devi, Batra, Dhruv, Vedantam, Ramakrishna

arXiv.org Machine Learning

We present a hierarchical reinforcement learning (HRL) or options framework for identifying decision states. Informally speaking, these are states considered important by the agent's policy, e.g., for navigation, decision states would be crossroads or doors where an agent needs to make strategic decisions. While previous work (most notably Goyal et al., 2019) discovers decision states in a task/goal specific (or 'supervised') manner, we do so in a goal-independent (or 'unsupervised') manner, i.e. entirely without any goal or extrinsic rewards. Our approach combines two hitherto disparate ideas - 1) \emph{intrinsic control} (Gregor et al., 2016, Eysenbach et al., 2018): learning a set of options that allow an agent to reliably reach a diverse set of states, and 2) \emph{information bottleneck} (Tishby et al., 2000): penalizing mutual information between the option $\Omega$ and the states $s_t$ visited in the trajectory. The former encourages an agent to reliably explore the environment; the latter allows identification of decision states as the ones with high mutual information $I(\Omega; a_t | s_t)$ despite the bottleneck. Our results demonstrate that 1) our model learns interpretable decision states in an unsupervised manner, and 2) these learned decision states transfer to goal-driven tasks in new environments, effectively guide exploration, and improve performance.
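The quantity $I(\Omega; a_t | s_t)$ that flags decision states can be illustrated with a simple count-based estimator over (option, state, action) triples; this toy version is our own construction for intuition, not the paper's variational estimator.

```python
import math
from collections import Counter, defaultdict

def decision_state_scores(trajectories):
    """Per-state plug-in estimate of I(Omega; a | s) from
    (option, state, action) triples: states where the action
    depends strongly on the option score high."""
    by_state = defaultdict(list)
    for option, state, action in trajectories:
        by_state[state].append((option, action))
    scores = {}
    for state, pairs in by_state.items():
        n = len(pairs)
        p_oa = Counter(pairs)              # joint counts over (option, action)
        p_o = Counter(o for o, _ in pairs) # option marginal counts
        p_a = Counter(a for _, a in pairs) # action marginal counts
        mi = 0.0
        for (o, a), c in p_oa.items():
            # (c/n) * log2( p(o,a) / (p(o) p(a)) ), with counts cancelled
            mi += (c / n) * math.log2((c * n) / (p_o[o] * p_a[a]))
        scores[state] = mi
    return scores
```

At a crossroads where each option picks a different direction the score is high (the action reveals the option); in a corridor where every option moves forward it is zero.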


InfoBot: Transfer and Exploration via the Information Bottleneck

Goyal, Anirudh, Islam, Riashat, Strouse, Daniel, Ahmed, Zafarali, Botvinick, Matthew, Larochelle, Hugo, Bengio, Yoshua, Levine, Sergey

arXiv.org Machine Learning

A central challenge in reinforcement learning is discovering effective policies for tasks where rewards are sparsely distributed. We postulate that in the absence of useful reward signals, an effective exploration strategy should seek out {\it decision states}. These states lie at critical junctions in the state space from where the agent can transition to new, potentially unexplored regions. We propose to learn about decision states from prior experience. By training a goal-conditioned policy with an information bottleneck, we can identify decision states by examining where the model actually leverages the goal state. We find that this simple mechanism effectively identifies decision states, even in partially observed settings. In effect, the model learns the sensory cues that correlate with potential subgoals. In new environments, this model can then identify novel subgoals for further exploration, guiding the agent through a sequence of potential decision states and through new regions of the state space.
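The idea of "examining where the model actually leverages the goal state" can be made concrete for a tabular goal-conditioned policy: score each state by how far the per-goal action distributions diverge from their goal-averaged mixture. This is a deliberately simplified sketch; the paper trains a neural policy with an information bottleneck, and the data layout here is hypothetical.

```python
import math

def goal_dependence(policy, goals, state):
    """Mean KL divergence of pi(.|s,g) from the goal-averaged policy
    at `state`.  policy[(state, goal)] maps actions to probabilities.
    High values mark decision states; zero means the goal is unused."""
    dists = [policy[(state, g)] for g in goals]
    actions = dists[0].keys()
    marginal = {a: sum(d[a] for d in dists) / len(dists) for a in actions}
    kl = 0.0
    for d in dists:
        kl += sum(d[a] * math.log2(d[a] / marginal[a])
                  for a in actions if d[a] > 0)
    return kl / len(dists)
```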


Integrating Episodic Memory into a Reinforcement Learning Agent using Reservoir Sampling

Young, Kenny J., Sutton, Richard S., Yang, Shuo

arXiv.org Machine Learning

Episodic memory is a term from psychology that refers to the ability to recall specific events from the past. We suggest one advantage of this particular type of memory is the ability to easily assign credit to a specific state when remembered information is found to be useful. Inspired by this idea, and the increasing popularity of external memory mechanisms to handle long-term dependencies in deep learning systems, we propose a novel algorithm which uses a reservoir sampling procedure to maintain an external memory consisting of a fixed number of past states. The algorithm allows a deep reinforcement learning agent to learn online to preferentially remember those states which are found to be useful to recall later on. Critically, this method allows for efficient online computation of gradient estimates with respect to the write process of the external memory. Thus, unlike most prior mechanisms for external memory, it is feasible to use in an online reinforcement learning setting. Much of reinforcement learning (RL) theory is based on the assumption that the environment has the Markov property, meaning that future states are independent of past states given the present state. This implies the agent has all the information it needs to make an optimal decision at each time and therefore has no need to remember the past. This, however, is not realistic in general: realistic problems often require significant information from the past to make an informed decision in the present, and there is often no obvious way to incorporate the relevant information into an expanded present state.
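The reservoir sampling procedure underlying the proposed memory can be sketched in its classic uniform form; the paper's contribution is a learned, differentiable write policy on top of this, which the sketch below does not attempt, and the class name is our own.

```python
import random

class ReservoirMemory:
    """Fixed-size external memory via classic reservoir sampling:
    after n pushes, every past state occupies a slot with equal
    probability size/n."""
    def __init__(self, size, rng=None):
        self.size = size
        self.slots = []
        self.seen = 0
        self.rng = rng or random.Random()

    def push(self, state):
        self.seen += 1
        if len(self.slots) < self.size:
            self.slots.append(state)       # fill phase
        else:
            j = self.rng.randrange(self.seen)
            if j < self.size:
                self.slots[j] = state      # evict uniformly at random
```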


Automata Modeling for Cognitive Interference in Users' Relevance Judgment

Zhang, Peng (The Robert Gordon University) | Song, Dawei (The Robert Gordon University) | Hou, Yuexian (Tianjin University) | Wang, Jun (Robert Gordon University) | Bruza, Peter (Queensland University of Technology)

AAAI Conferences

Quantum theory has recently been employed to further advance the theory of information retrieval (IR). A challenging research topic is to investigate the so-called quantum-like interference in users' relevance judgment process, where users are involved to judge the relevance degree of each document with respect to a given query. In this process, users' relevance judgment for the current document is often interfered by the judgment for previous documents, due to the interference on users' cognitive status. Research from cognitive science has demonstrated some initial evidence of quantum-like cognitive interference in human decision making, which underpins the user's relevance judgment process. This motivates us to model such cognitive interference in the relevance judgment process, which in our belief will lead to a better modeling and explanation of user behaviors in the relevance judgment process for IR and eventually lead to more user-centric IR models. In this paper, we propose to use probabilistic automaton (PA) and quantum finite automaton (QFA), which are suitable to represent the transition of user judgment states, to dynamically model the cognitive interference when the user is judging a list of documents.
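The probabilistic-automaton half of this proposal can be illustrated by propagating a distribution over judgment states through document-conditioned stochastic transitions. The state names, document types, and transition values below are invented for illustration; a QFA would replace the probability vectors with amplitude vectors and unitary updates.

```python
def run_pa(transitions, initial, documents):
    """Propagate a distribution over judgment states through a
    probabilistic automaton.  transitions[doc][s] is a dict mapping
    next state s2 to the probability of moving s -> s2 after
    judging a document of type `doc`."""
    dist = dict(initial)
    for doc in documents:
        nxt = {s: 0.0 for s in dist}
        for s, p in dist.items():
            for s2, q in transitions[doc][s].items():
                nxt[s2] += p * q   # total probability of landing in s2
        dist = nxt
    return dist
```

Order effects, the interference of earlier judgments on later ones, show up because the final distribution depends on the sequence of documents, not just on their multiset.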