Collaborating Authors: maze exploration


Multi-robot maze exploration using an efficient cost-utility method

Linardakis, Manousos, Varlamis, Iraklis, Papadopoulos, Georgios Th.

arXiv.org Artificial Intelligence

In the field of modern robotics, robots are proving useful in high-risk situations, such as navigating hazardous environments (burning buildings, earthquake-stricken areas), patrolling crime-ridden streets, and exploring uncharted caves. These scenarios share the complexity of maze exploration problems. While several methods have been proposed for single-agent systems, ranging from potential fields to flood-fill methods, recent research has focused on methods tailored to multiple agents in order to enhance the quality and efficiency of maze coverage. The contribution of this paper is the implementation of established maze exploration methods and their comparison with a new cost-utility algorithm for multiple agents, which combines the existing methodologies to optimize exploration outcomes. Through a comprehensive comparative analysis, the paper evaluates the performance of the new approach against the implemented baseline methods from the literature, highlighting its efficacy and potential advantages in various scenarios. The code and experimental results supporting this study are available in the following repository: https://github.com/manouslinard/multiagent-exploration/.
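The abstract does not spell out the paper's specific cost-utility formula; as a rough illustration of the general cost-utility idea it builds on, the sketch below greedily assigns each agent the frontier cell maximizing utility minus travel cost on a grid maze. All names, the utility definition (count of adjacent unknown cells), and the trade-off weight `lam` are illustrative assumptions, not the authors' method.

```python
from collections import deque

FREE, WALL, UNKNOWN = 0, 1, 2

def bfs_distances(grid, start):
    """Shortest-path lengths from start through known free cells."""
    dist = {start: 0}
    q = deque([start])
    while q:
        r, c = q.popleft()
        for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == FREE and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[(r, c)] + 1
                q.append((nr, nc))
    return dist

def frontier_utility(grid, cell):
    """Toy utility: number of unknown cells adjacent to a frontier cell."""
    r, c = cell
    return sum(1 for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1))
               if 0 <= nr < len(grid) and 0 <= nc < len(grid[0])
               and grid[nr][nc] == UNKNOWN)

def assign_goals(grid, agents, frontiers, lam=1.0):
    """Greedily give each agent the frontier maximizing utility - lam * cost."""
    assignment, taken = {}, set()
    for a in agents:
        dist = bfs_distances(grid, a)
        best, best_score = None, float("-inf")
        for f in frontiers:
            if f in taken or f not in dist:
                continue
            score = frontier_utility(grid, f) - lam * dist[f]
            if score > best_score:
                best, best_score = f, score
        if best is not None:
            assignment[a] = best
            taken.add(best)
    return assignment
```

In this scheme a nearby, information-rich frontier beats a distant one; tuning `lam` trades exploration gain against travel cost.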


Distributed maze exploration using multiple agents and optimal goal assignment

Linardakis, Manousos, Varlamis, Iraklis, Papadopoulos, Georgios Th.

arXiv.org Artificial Intelligence

Robotic exploration has long captivated researchers aiming to map complex environments efficiently. Techniques such as potential fields and frontier exploration have traditionally been employed in this pursuit, primarily focusing on solitary agents. Recent advancements have shifted towards optimizing exploration efficiency through multiagent systems. However, many existing approaches overlook critical real-world factors, such as broadcast range limitations, communication costs, and coverage overlap. This paper addresses these gaps by proposing a distributed maze exploration strategy (CU-LVP) that assumes constrained broadcast ranges and utilizes Voronoi diagrams for better area partitioning. By adapting traditional multiagent methods to distributed environments with limited broadcast ranges, this study evaluates their performance across diverse maze topologies, demonstrating the efficacy and practical applicability of the proposed method. The code and experimental results supporting this study are available in the following repository: https://github.com/manouslinard/multiagent-exploration/.
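CU-LVP's exact partitioning scheme is not detailed in the abstract; a common way to realize a Voronoi-style split of a maze among agents is a multi-source BFS over the known free cells, assigning each cell to its nearest agent by path distance. The sketch below is a minimal illustration under that assumption, not the paper's implementation.

```python
from collections import deque

def voronoi_partition(free_cells, agents):
    """Partition known free cells among agents by shortest-path distance
    (multi-source BFS), approximating a graph Voronoi diagram."""
    free = set(free_cells)
    owner = {a: i for i, a in enumerate(agents)}  # seed each agent's region
    q = deque(agents)
    while q:
        r, c = q.popleft()
        for nb in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            if nb in free and nb not in owner:
                owner[nb] = owner[(r, c)]  # claimed by the nearest agent
                q.append(nb)
    regions = {}
    for cell, i in owner.items():
        regions.setdefault(i, set()).add(cell)
    return regions
```

Each agent can then restrict its frontier search to its own region, which is what limits coverage overlap when broadcast ranges are constrained.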


Fast algorithm for centralized multi-agent maze exploration

Crnković, Bojan, Ivić, Stefan, Zovko, Mila

arXiv.org Artificial Intelligence

Recent advancements in robotics have paved the way for robots to replace humans in perilous situations, such as searching for victims in blazing buildings, earthquake-damaged structures, uncharted caves, traversing minefields, or patrolling crime-ridden streets. These challenges can be generalized as problems where agents need to explore unknown mazes. Although various algorithms for single-agent maze exploration exist, extending them to multi-agent systems poses complexities. We propose a solution: a cooperative multi-agent system of automated mobile agents for exploring unknown mazes and locating stationary targets. Our algorithm employs a potential field governing maze exploration, integrating cooperative agent behaviors like collision avoidance, coverage coordination, and path planning. This approach builds upon the Heat Equation Driven Area Coverage (HEDAC) method by Ivić, Crnković, and Mezić. Unlike previous continuous domain applications, we adapt HEDAC for discrete domains, specifically mazes divided into nodes. Our algorithm is versatile, easily modified for anti-collision requirements, and adaptable to expanding mazes and numerical meshes over time. Comparative evaluations against alternative maze-solving methods illustrate our algorithm's superiority. The results highlight significant enhancements, showcasing its applicability across diverse mazes. Numerical simulations affirm its robustness, adaptability, scalability, and simplicity, enabling centralized parallel computation in autonomous systems of basic agents/robots.
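The core HEDAC idea, here transplanted to a discrete node graph, is that unexplored nodes act as heat sources and agents climb the resulting potential. The sketch below is a minimal stand-in, assuming a simple screened-diffusion (Jacobi) iteration and greedy ascent; the authors' actual discretization, boundary handling, and anti-collision terms are not reproduced.

```python
def heat_potential(adj, sources, beta=0.1, iters=200):
    """Jacobi iteration of a screened discrete heat equation on a graph:
    u[n] = (source[n] + sum of neighbor potentials) / (degree + beta).
    Unexplored nodes carry positive source terms; beta damps the field."""
    u = {n: 0.0 for n in adj}
    for _ in range(iters):
        u = {n: (sources.get(n, 0.0) + sum(u[m] for m in adj[n]))
                / (len(adj[n]) + beta)
             for n in adj}
    return u

def greedy_moves(adj, u, agents):
    """Each agent steps to the neighboring node with the highest potential."""
    return [max(adj[a], key=lambda m: u[m], default=a) for a in agents]
```

Because the potential decays smoothly with graph distance from unexplored regions, agents far from any source still receive a usable gradient, which is what makes the field-based approach robust in long corridors.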


Deep Surrogate Assisted Generation of Environments

Bhatt, Varun, Tjanaka, Bryon, Fontaine, Matthew C., Nikolaidis, Stefanos

arXiv.org Artificial Intelligence

Recent progress in reinforcement learning (RL) has started producing generally capable agents that can solve a distribution of complex environments. These agents are typically tested on fixed, human-authored environments. On the other hand, quality diversity (QD) optimization has been proven to be an effective component of environment generation algorithms, which can generate collections of high-quality environments that are diverse in the resulting agent behaviors. However, these algorithms require potentially expensive simulations of agents on newly generated environments. We propose Deep Surrogate Assisted Generation of Environments (DSAGE), a sample-efficient QD environment generation algorithm that maintains a deep surrogate model for predicting agent behaviors in new environments. Results in two benchmark domains show that DSAGE significantly outperforms existing QD environment generation algorithms in discovering collections of environments that elicit diverse behaviors of a state-of-the-art RL agent and a planning agent. Our source code and videos are available at https://dsagepaper.github.io/.
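DSAGE's deep surrogate network and benchmark domains are beyond an abstract-level sketch; the toy below only illustrates the surrogate-assisted pattern the abstract describes, i.e. screening many candidate environments on a cheap learned predictor and spending the expensive simulation budget on the most promising ones, with the results archived by a behavior measure as in quality diversity methods. The nearest-neighbor "surrogate", the synthetic objective, and all names are illustrative assumptions, not the paper's implementation.

```python
import random

def true_eval(env):
    """Stand-in for an expensive agent simulation: returns
    (objective, behavior measure) for a 2-D environment parameter vector."""
    x, y = env
    return -(x - 0.5) ** 2 - (y - 0.5) ** 2, x  # measure x lies in [0, 1]

class NearestNeighborSurrogate:
    """Toy stand-in for a deep surrogate: predicts the outcome of the
    nearest previously simulated environment."""
    def __init__(self):
        self.data = []  # (env, objective, measure) triples
    def fit(self, env, obj, meas):
        self.data.append((env, obj, meas))
    def predict(self, env):
        _, obj, meas = min(self.data,
            key=lambda d: (d[0][0]-env[0])**2 + (d[0][1]-env[1])**2)
        return obj, meas

def _insert(archive, env, obj, meas, bins):
    """MAP-Elites-style archive: keep the best env per measure bin."""
    b = min(int(meas * bins), bins - 1)
    if b not in archive or obj > archive[b][1]:
        archive[b] = (env, obj)

def surrogate_assisted_qd(n_outer=5, n_candidates=50, n_sims=5, bins=10):
    rng = random.Random(0)
    archive, surrogate = {}, NearestNeighborSurrogate()
    # Bootstrap the surrogate with a few real simulations.
    for _ in range(n_sims):
        env = (rng.random(), rng.random())
        obj, meas = true_eval(env)
        surrogate.fit(env, obj, meas)
        _insert(archive, env, obj, meas, bins)
    for _ in range(n_outer):
        # Screen many candidates cheaply on the surrogate...
        cands = [(rng.random(), rng.random()) for _ in range(n_candidates)]
        cands.sort(key=lambda e: surrogate.predict(e)[0], reverse=True)
        # ...and spend the simulation budget only on the most promising.
        for env in cands[:n_sims]:
            obj, meas = true_eval(env)
            surrogate.fit(env, obj, meas)
            _insert(archive, env, obj, meas, bins)
    return archive
```

The sample efficiency comes from the ratio `n_candidates / n_sims`: many environments are scored by the surrogate for each one that is actually simulated.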