Advances in Multi-agent Reinforcement Learning: Persistent Autonomy and Robot Learning Lab Report 2024
Azadeh, Reza
Multi-Agent Reinforcement Learning (MARL) approaches have emerged as popular solutions to the general challenges of cooperation in multi-agent environments, where success in achieving shared or individual goals critically depends on coordination and collaboration between agents. However, existing cooperative MARL methods face several challenges intrinsic to multi-agent systems, such as the curse of dimensionality, non-stationarity, and the need for a global exploration strategy. Moreover, the presence of agents with constraints (e.g., limited battery life, restricted mobility) or distinct roles further exacerbates these challenges. This document provides an overview of recent advances in MARL conducted at the Persistent Autonomy and Robot Learning (PeARL) lab at the University of Massachusetts Lowell. We briefly discuss various research directions and present a selection of approaches proposed in our most recent publications. For each approach, we also highlight potential future directions to further advance the field.
Relational Weight Optimization for Enhancing Team Performance in Multi-Agent Multi-Armed Bandits
Kotturu, Monish Reddy, Movahed, Saniya Vahedian, Robinette, Paul, Jerath, Kshitij, Redlich, Amanda, Azadeh, Reza
Multi-Armed Bandits (MABs) are a class of reinforcement learning problems where an agent is presented with a set of arms (i.e., actions), with each arm giving a reward drawn from a probability distribution unknown to the agent (Lattimore and Szepesvári, 2020). The goal of the agent is to maximize its total reward, which requires balancing exploration and exploitation. MABs offer a simple model to simulate decision-making under uncertainty. Practical applications of MAB algorithms include news recommendations (Yang and Toni, 2018), online ad placement (Aramayo et al., 2022), dynamic pricing (Babaioff et al., 2015), and adaptive experimental design (Rafferty et al., 2019). In contrast to single-agent cases, in certain applications such as search and rescue, a team of agents should cooperate with each other to accomplish goals by maximizing team performance. Such problems are solved using Multi-Agent Multi-Armed Bandit (MAMAB) algorithms (Xu et al., 2020). Most existing algorithms rely on the presence of a graph: using a graph to represent the team behavior ensures that the relationships between the agents are maintained. However, existing works either do not consider the weight of each relationship (graph edges) (Madhushani and Leonard, 2020; Agarwal et al., 2021) or expect the user to manually set those weights (Moradipari et al., 2022). In this paper, we propose a new approach that combines graph optimization and MAMAB algorithms to enhance team performance by expediting the convergence to consensus of arm means. Our proposed approach: (1) improves team performance by optimizing the edge weights in the graph representing the team structure in large constrained teams, (2) does not require manual tuning of the graph weights, (3) is independent of the MAMAB algorithm and only depends on the consensus formula, and (4) formulates the problem as a convex optimization, which is computationally efficient for large teams.
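To make the idea of optimizing relational weights concrete, the following is a minimal sketch of one standard way to speed up a linear consensus update x_{k+1} = W x_k over a fixed team graph, in the style of fastest distributed linear averaging. It is an illustration under assumptions, not the paper's exact formulation: the 4-agent line graph, the cvxpy modeling library, and the spectral-norm objective are all choices made here for the example.

```python
# Hedged sketch: choose symmetric edge weights W for a fixed team graph so that
# the averaging iteration x_{k+1} = W x_k converges to consensus as fast as possible.
import cvxpy as cp
import numpy as np

n = 4
edges = {(0, 1), (1, 2), (2, 3)}          # assumed 4-agent line graph
ones = np.ones((n, 1))
avg = np.ones((n, n)) / n                 # projection onto the consensus subspace

W = cp.Variable((n, n), symmetric=True)
constraints = [W @ ones == ones]          # rows sum to 1, so consensus is preserved
for i in range(n):
    for j in range(i + 1, n):
        if (i, j) not in edges:
            constraints.append(W[i, j] == 0)   # no weight between unconnected agents

# A smaller spectral norm of (W - 11^T/n) means faster convergence to consensus.
problem = cp.Problem(cp.Minimize(cp.sigma_max(W - avg)), constraints)
problem.solve()

print("optimized edge weights:\n", np.round(W.value, 3))
```

Because the objective and constraints are convex, this kind of formulation remains tractable as the team grows, which is why a convex-optimization view of the edge weights is attractive for large constrained teams.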
Investigating the Generalizability of Assistive Robots Models over Various Tasks
Osooli, Hamid, Coco, Christopher, Spanos, Johnathan, Majdi, Amin, Azadeh, Reza
In the domain of assistive robotics, the significance of effective modeling is well acknowledged. Prior research has primarily focused on enhancing model accuracy or involved the collection of extensive, often impractical amounts of data. While improving individual model accuracy is beneficial, it necessitates constant remodeling for each new task and user interaction. In this paper, we investigate the generalizability of different modeling methods. We focus on constructing the dynamic model of an assistive exoskeleton using six data-driven regression algorithms. Six tasks are considered in our experiments: horizontal, vertical, diagonal from the left leg to the right eye and the reverse, as well as eating and pushing. We constructed thirty-six unique models by applying each regression method to data gathered from each task. Each trained model's performance was evaluated in a cross-validation scenario, utilizing five folds for each dataset. Each trained model is then tested on the tasks it was not trained on. Finally, the models in our study are assessed in terms of generalizability. Results show the superior generalizability of the model for the task performed along the horizontal plane, as well as of decision-tree-based algorithms.
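To illustrate the cross-task evaluation protocol in a hedged way, the sketch below trains a regressor on one task, cross-validates it with five folds, and then scores the trained model on every other task. The synthetic data, the six-dimensional feature vector, and the scikit-learn DecisionTreeRegressor are placeholders chosen for the example, not the exoskeleton data or the specific algorithms used in the study.

```python
# Hedged sketch of train-on-one-task, test-on-the-others generalizability scoring.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
# Placeholder datasets: one (features, target) pair per task; real data would
# come from the exoskeleton recordings for each motion.
task_names = ["horizontal", "vertical", "diag_left_right", "diag_right_left", "eating", "pushing"]
tasks = {name: (rng.normal(size=(200, 6)), rng.normal(size=200)) for name in task_names}

within_task_cv = {}
cross_task_scores = {}
for train_name, (X_tr, y_tr) in tasks.items():
    model = DecisionTreeRegressor(max_depth=5, random_state=0)
    # 5-fold cross-validation on the training task itself.
    within_task_cv[train_name] = cross_val_score(model, X_tr, y_tr, cv=5, scoring="r2").mean()
    model.fit(X_tr, y_tr)
    for test_name, (X_te, y_te) in tasks.items():
        if test_name != train_name:
            # Generalizability: how well a model trained on one task explains another.
            cross_task_scores[(train_name, test_name)] = model.score(X_te, y_te)

print(within_task_cv)
print(cross_task_scores)
```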
An Adaptive Framework for Manipulator Skill Reproduction in Dynamic Environments
Donald, Ryan, Hertel, Brendan, Misenti, Stephen, Gu, Yan, Azadeh, Reza
Robot skill learning and execution in uncertain and dynamic environments is a challenging task. This paper proposes an adaptive framework that combines Learning from Demonstration (LfD), environment state prediction, and high-level decision making. Proactive adaptation prevents the need for reactive adaptation, which lags behind changes in the environment rather than anticipating them. We propose a novel LfD representation, Elastic-Laplacian Trajectory Editing (ELTE), which continuously adapts the trajectory shape to predictions of future states. Then, a high-level reactive system using an Unscented Kalman Filter (UKF) and Hidden Markov Model (HMM) prevents unsafe execution in the current state of the dynamic environment based on a discrete set of decisions. We first validate our LfD representation in simulation, then experimentally assess the entire framework using a legged mobile manipulator in 36 real-world scenarios. We show the effectiveness of the proposed framework under different dynamic changes in the environment. Our results show that the proposed framework produces robust and stable adaptive behaviors.
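As a rough illustration of the environment-state-prediction component, the sketch below uses an Unscented Kalman Filter with a constant-velocity motion model to estimate a moving object's state from position measurements and roll it forward, so a trajectory could be adapted proactively rather than reactively. The filterpy library, the motion model, and the numbers are assumptions for the example; the paper's actual UKF and HMM design is not reproduced here.

```python
# Hedged sketch: UKF state estimation plus a naive forward rollout for proactive adaptation.
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

dt = 0.1  # assumed control period [s]

def fx(x, dt):
    # Constant-velocity motion model; state = [px, py, vx, vy].
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]])
    return F @ x

def hx(x):
    # Only the position of the object is measured.
    return x[:2]

points = MerweScaledSigmaPoints(n=4, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=4, dim_z=2, dt=dt, fx=fx, hx=hx, points=points)
ukf.x = np.zeros(4)
ukf.P *= 0.5

for z in ([0.05, 0.01], [0.11, 0.02], [0.18, 0.02]):  # hypothetical position measurements
    ukf.predict()
    ukf.update(np.array(z))

# Anticipate where the object will be a few steps ahead, so the skill trajectory
# can be reshaped before the change happens instead of after it.
lookahead_state = fx(ukf.x, 5 * dt)
print("predicted state 0.5 s ahead:", np.round(lookahead_state, 3))
```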
Design of Fuzzy Logic Parameter Tuners for Upper-Limb Assistive Robots
Coco, Christopher Jr., Spanos, Jonathan, Osooli, Hamid, Azadeh, Reza
Assistive exoskeleton robots are helping restore function to people suffering from underlying medical conditions. These robots require precise tuning of hyper-parameters to feel natural to the user. The device hyper-parameters often need to be re-tuned from task to task, which can be tedious and requires expert knowledge. To address this issue, we develop a set of fuzzy logic controllers that dynamically tune the robot's gain parameters to adapt its sensitivity to the user's intention, determined from muscle activation. The designed fuzzy controllers benefit from a set of expert-defined rules and do not rely on extensive amounts of training data. We evaluate the designed controllers on three different tasks and compare our results against the manually tuned system. Our preliminary results show that our controllers reduce the amount of fighting between the device and the human, measured using a set of pressure sensors.
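The sketch below shows, under stated assumptions, what a rule-based fuzzy gain tuner of this kind can look like: a small set of expert-defined rules maps a normalized muscle-activation level to a controller gain, with no training data involved. The scikit-fuzzy library, the universes of discourse, and the three-rule base are illustrative choices, not the controllers designed in the paper.

```python
# Hedged sketch: fuzzy rules mapping muscle activation to a robot gain parameter.
import numpy as np
from skfuzzy import control as ctrl

# Assumed ranges: normalized activation in [0, 1], gain in [0, 10].
activation = ctrl.Antecedent(np.linspace(0, 1, 101), 'activation')
gain = ctrl.Consequent(np.linspace(0, 10, 101), 'gain')

# Automatically generated triangular membership functions for low/medium/high.
activation.automf(3, names=['low', 'medium', 'high'])
gain.automf(3, names=['low', 'medium', 'high'])

# Expert-defined rules: stronger activation -> more sensitive (higher-gain) assistance.
rules = [
    ctrl.Rule(activation['low'], gain['low']),
    ctrl.Rule(activation['medium'], gain['medium']),
    ctrl.Rule(activation['high'], gain['high']),
]

tuner = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
tuner.input['activation'] = 0.72   # e.g., an EMG-derived activation level
tuner.compute()
print("gain for next control cycle:", round(tuner.output['gain'], 2))
```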