Goto

Collaborating Authors

 Edelkamp, Stefan


Planning with Vision-Language Models and a Use Case in Robot-Assisted Teaching

arXiv.org Artificial Intelligence

Automating the generation of Planning Domain Definition Language (PDDL) problems with Large Language Models (LLMs) opens a new research topic in AI planning, particularly for complex real-world tasks. This paper introduces Image2PDDL, a novel framework that leverages Vision-Language Models (VLMs) to automatically convert images of initial states and descriptions of goal states into PDDL problems. By providing a PDDL domain alongside visual inputs, Image2PDDL addresses key challenges in bridging perceptual understanding with symbolic planning, reducing the expertise required to create structured problem instances, and improving scalability across tasks of varying complexity. We evaluate the framework on various domains, including standard planning domains like blocksworld and sliding-tile puzzles, using datasets with multiple difficulty levels. Performance is assessed on syntax correctness, ensuring grammar and executability, and on content correctness, verifying accurate state representation in the generated PDDL problems. The proposed approach demonstrates promising results across diverse task complexities, suggesting its potential for broader applications in AI planning. We will discuss a potential use case in robot-assisted teaching of students with Autism Spectrum Disorder.
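
As a rough illustration of the kind of pipeline the abstract describes, the sketch below wires a generic vision-language model call into a PDDL-problem generator. The prompt format and the query_vlm callable are hypothetical placeholders, not Image2PDDL's actual interface, and real validation would use a PDDL parser such as VAL rather than the cheap structural check shown.

    # Sketch of an Image2PDDL-style pipeline (hypothetical interface; the
    # paper's actual prompts and model are not reproduced here).

    def build_prompt(domain_pddl: str, goal_description: str) -> str:
        """Combine the fixed PDDL domain and the goal text into a VLM prompt."""
        return (
            "You are given a PDDL domain and an image of the initial state.\n"
            f"Domain:\n{domain_pddl}\n"
            f"Goal description: {goal_description}\n"
            "Emit a complete PDDL problem file (define (problem ...))."
        )

    def image_to_pddl(image_bytes: bytes, domain_pddl: str,
                      goal_description: str, query_vlm) -> str:
        """query_vlm is any callable (prompt, image) -> text, e.g. a
        wrapper around whichever VLM the caller supplies."""
        prompt = build_prompt(domain_pddl, goal_description)
        candidate = query_vlm(prompt, image_bytes)
        # Cheap structural test before handing the problem to a planner;
        # a real syntax check would run a PDDL parser/validator.
        if not candidate.lstrip().startswith("(define (problem"):
            raise ValueError("VLM output is not a PDDL problem")
        return candidate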


Heuristic Planner for Communication-Constrained Multi-Agent Multi-Goal Path Planning

arXiv.org Artificial Intelligence

Abstract -- In robotics, coordinating a group of robots is an essential task. This work presents the communication-constrained multi-agent multi-goal path planning problem and proposes a graph-search-based algorithm to address this task. Given a fleet of robots, an environment represented by a weighted graph, and a sequence of goals, the aim is to visit all the goals without breaking the communication constraints between the agents, minimizing the completion time. There are many ways the agents might interact with each other and their environment, and there are many limitations one might consider; this work is motivated by the constraint of limited communication distance. As long as the communication remains unbroken, the whole system works as if each of the robots had access to the computational power of the arbiter. As a similar example, imagine a mother-ship-style drone that sends out small drones. [Figure caption: while the red agent visits the first goal, the other two agents position themselves favorably.]
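
To make the problem setting concrete, here is a minimal sketch (not the paper's planner) of a breadth-first search over joint agent positions on a grid, in which a joint move is legal only while the team's communication graph stays connected. The range parameter R, the grid encoding, and the simplification to a single goal cell are illustrative assumptions.

    # Communication-constrained joint search, heavily simplified:
    # unit-cost grid moves instead of a weighted graph, one goal
    # instead of a goal sequence.
    from collections import deque
    from itertools import product

    def connected(positions, R=2):
        """True if the agents form one connected component when any two
        agents within Manhattan distance R can communicate."""
        n = len(positions)
        seen, stack = {0}, [0]
        while stack:
            i = stack.pop()
            for j in range(n):
                if j not in seen:
                    (x1, y1), (x2, y2) = positions[i], positions[j]
                    if abs(x1 - x2) + abs(y1 - y2) <= R:
                        seen.add(j)
                        stack.append(j)
        return len(seen) == n

    def joint_bfs(start, goal_cell, free):
        """start: tuple of (x, y) per agent; done when any agent reaches
        goal_cell; free(cell) says whether a cell is traversable."""
        moves = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]  # waiting allowed
        frontier, seen = deque([(start, 0)]), {start}
        while frontier:
            state, t = frontier.popleft()
            if goal_cell in state:
                return t
            for deltas in product(moves, repeat=len(state)):
                nxt = tuple((x + dx, y + dy)
                            for (x, y), (dx, dy) in zip(state, deltas))
                # Expand only joint moves that keep communication unbroken.
                if all(free(c) for c in nxt) and connected(nxt) and nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, t + 1))
        return None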


SIL-RRT*: Learning Sampling Distribution through Self Imitation Learning

arXiv.org Artificial Intelligence

Efficiently finding safe and feasible trajectories for mobile objects is a critical field in robotics and computer science. In this paper, we propose SIL-RRT*, a novel learning-based motion planning algorithm that extends the RRT* algorithm by using a deep neural network to predict a distribution for sampling at each iteration. We evaluate SIL-RRT* on various 2D and 3D environments and establish that it can efficiently solve high-dimensional motion planning problems with fewer samples than traditional sampling-based algorithms. Moreover, SIL-RRT* is able to scale to more complex environments, making it a promising approach for solving challenging robotic motion planning problems.
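
The core idea of biasing a sampling-based planner with a learned proposal can be sketched in a few lines; the mixing weight lam and the two sampler callables are illustrative assumptions, and the exact schedule SIL-RRT* uses may differ. Retaining a uniform component is the usual way such planners preserve RRT*'s probabilistic completeness.

    import random

    def sample_config(learned_sampler, uniform_sampler, lam=0.9):
        """Draw the next RRT* sample from the neural proposal with
        probability lam, and uniformly from free space otherwise.
        learned_sampler() -> configuration predicted by the network,
        uniform_sampler() -> configuration drawn uniformly."""
        return learned_sampler() if random.random() < lam else uniform_sampler()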


CLIP-Motion: Learning Reward Functions for Robotic Actions Using Consecutive Observations

arXiv.org Artificial Intelligence

This paper presents a novel method for learning reward functions for robotic motions by harnessing the power of a CLIP-based model. Traditional reward function design often hinges on manual feature engineering, which can struggle to generalize across an array of tasks. Our approach circumvents this challenge by capitalizing on CLIP's capability to process both state features and image inputs effectively. Given a pair of consecutive observations, our model excels in identifying the motion executed between them. We showcase results spanning various robotic activities, such as directing a gripper to a designated target and adjusting the position of a cube. Through experimental evaluations, we underline the proficiency of our method in precisely deducing motion and its promise to enhance reinforcement learning training in the realm of robotics. Reinforcement Learning (RL) distinguishes itself within the machine learning spectrum by enabling an agent to determine optimal decisions through interactions with an environment, while targeting maximal cumulative reward. The linchpin of this approach is the reward mechanism, which steers the agent's decision-making. Each reward, be it positive or negative, influences the agent's action choices. Essentially, the reward system is RL's guiding compass, directing the agent toward optimal actions and ensuring its adaptability in dynamic and unpredictable scenarios. Without it, the agent would drift aimlessly, unable to discern advantageous actions from detrimental ones.
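
As a hedged sketch of the reward idea, the snippet below scores a pair of consecutive observations against embeddings of candidate motions; encode_pair and motion_embeddings are hypothetical stand-ins for the CLIP-based encoder, not the paper's actual model.

    import numpy as np

    def motion_reward(obs_t, obs_t1, target_motion, encode_pair, motion_embeddings):
        """encode_pair(obs_t, obs_t1) -> unit vector summarizing the
        transition (stand-in for the CLIP-style pair encoder);
        motion_embeddings: dict motion_name -> unit vector."""
        z = encode_pair(obs_t, obs_t1)
        scores = {m: float(np.dot(z, e)) for m, e in motion_embeddings.items()}
        predicted = max(scores, key=scores.get)
        # Dense reward: similarity to the intended motion's embedding;
        # a sparse variant would return 1.0 only when predicted == target.
        return scores[target_motion], predicted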


Optimize Planning Heuristics to Rank, not to Estimate Cost-to-Goal

arXiv.org Artificial Intelligence

In imitation learning for planning, parameters of heuristic functions are optimized against a set of solved problem instances. This work revisits the necessary and sufficient conditions of strictly optimally efficient heuristics for forward search algorithms, mainly A* and greedy best-first search, which expand only states on the returned optimal path. It then proposes a family of loss functions based on ranking, tailored for a given variant of the forward search algorithm. Furthermore, from a learning theory point of view, it discusses why optimizing the cost-to-goal, h*, is unnecessarily difficult. The experimental comparison on a diverse set of problems unequivocally supports the derived theory.
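
A generic surrogate for such a ranking objective is a pairwise margin loss that pushes states on the returned plan to score strictly better (lower h) than the off-path states they compete with in the open list. The PyTorch sketch below is one plausible instance of this idea, not the paper's exact loss family.

    import torch

    def rank_loss(h_on_path, h_off_path, margin=1.0):
        """h_on_path: tensor [n] of predicted heuristic values for states
        on the solution path; h_off_path: tensor [m] for competing
        off-path states. Penalizes every pair where the on-path state is
        not ranked at least `margin` better (lower) than the off-path one."""
        diff = h_on_path.unsqueeze(1) - h_off_path.unsqueeze(0) + margin
        return torch.relu(diff).mean()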


Heuristic Search Planning with Deep Neural Networks using Imitation, Attention and Curriculum Learning

arXiv.org Artificial Intelligence

Learning a well-informed heuristic function for hard task-planning domains is an elusive problem. Although there are known neural network architectures to represent such heuristic knowledge, it is not obvious what concrete information is learned and whether techniques aimed at understanding the structure help in improving the quality of the heuristics. This paper presents a network model that learns a heuristic capable of relating distant parts of the state space via optimal-plan imitation using the attention mechanism, which drastically improves the learning of a good heuristic function. To counter the method's reliance on the creation of problems of increasing difficulty, we demonstrate the use of curriculum learning, in which newly solved problem instances are added to the training set; this, in turn, helps to solve problems of higher complexity and far exceeds the performance of all existing baselines, including classical planning heuristics. We demonstrate its effectiveness for grid-type PDDL domains.
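
The curriculum loop itself is simple to state. The sketch below is an illustrative reading of the procedure described above, with solve and train as stand-in callables rather than the paper's code.

    def curriculum_train(model, problems, solve, train, budget):
        """Repeatedly attempt all unsolved problems with the current
        heuristic, add newly solved instances (with their plans) to the
        training set, and retrain on the grown set.
        solve(model, p, budget) -> plan or None; train(model, data) -> model."""
        dataset, unsolved = [], list(problems)
        while unsolved:
            attempts = [(p, solve(model, p, budget)) for p in unsolved]
            solved = [(p, plan) for p, plan in attempts if plan is not None]
            if not solved:          # no progress: stop (or raise the budget)
                break
            dataset += solved
            unsolved = [p for p, plan in attempts if plan is None]
            model = train(model, dataset)
        return model, dataset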


ELO System for Skat and Other Games of Chance

arXiv.org Artificial Intelligence

Assessing the skill level of players, to predict the outcome and to rank the players in a longer series of games, is of critical importance for tournament play. Despite weaknesses, such as a continuous inflation observed as the playing body steadily grows, the ELO ranking system, named after its creator Arpad Elo, has proven to be a reliable method for calculating the relative skill levels of players in zero-sum games. The evaluation of player strength in trick-taking card games like Skat or Bridge, however, is not obvious. Firstly, these are incomplete-information, partially observable games with more than one player, where opponent strength should influence the scoring, as it does in existing ELO systems. Secondly, they are games of both skill and chance, so that besides playing strength, the outcome of a game also depends on the deal. Last but not least, there are internationally established scoring systems by which players are accustomed to being evaluated, and with which an ELO system should align. Based on a tournament scoring system, we propose a new ELO system for Skat to overcome these weaknesses.
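
For reference, the textbook two-player Elo update that such systems start from is shown below; adapting it to Skat's multi-player, chance-dependent tournament scoring is precisely the paper's contribution and is more involved than this form.

    def elo_update(r_a, r_b, score_a, k=20):
        """Standard two-player Elo update.
        score_a: 1 for a win, 0.5 for a draw, 0 for a loss by player A;
        k is the usual update step (K-factor)."""
        expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
        return r_a + k * (score_a - expected_a)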


On the Power of Refined Skat Selection

arXiv.org Artificial Intelligence

Skat is a fascinating combinatorial card game, showcasing many of the intrinsic challenges for modern AI systems, such as cooperative and adversarial behaviors (among the players), randomness (in the deal), and partial knowledge (due to hidden cards). Given the larger number of tricks and higher degree of uncertainty, reinforcement learning is less effective compared to classical board games like Chess and Go. As in the game of Bridge, in Skat we have a bidding and a trick-taking stage. Prior to the trick-taking, and as part of the bidding process, one phase of the game is to select two skat cards, whose quality may influence subsequent playing performance drastically. This paper looks into different skat selection strategies. Besides predicting the probability of winning and other hand-strength functions, we propose hard expert rules and a scoring function based on refined skat evaluation features. Experiments emphasize the impact of the refined skat-putting algorithm on the playing performance of the bots, especially for AI bidding and AI game selection.
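
Structurally, skat selection is a small enumeration problem: the declarer holds twelve cards (ten from the hand plus the two picked-up skat cards) and must put two back into the skat, giving C(12, 2) = 66 candidate discards to score. The sketch below shows this skeleton, with score_hand as a stand-in for the refined evaluation features discussed above.

    from itertools import combinations

    def select_skat(cards12, score_hand):
        """Enumerate all 66 two-card discards and keep the pair whose
        remaining ten-card hand scores best.
        score_hand(hand, skat) -> float stands in for hand-strength
        features, winning-probability predictors, or expert rules."""
        best = max(combinations(cards12, 2),
                   key=lambda skat: score_hand(
                       [c for c in cards12 if c not in skat], skat))
        return best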


Knowledge-Based Paranoia Search in Trick-Taking

arXiv.org Artificial Intelligence

This paper proposes knowledge-based paranoia search (KBPS) to find forced wins during trick-taking in the card game Skat, for some one of the most interesting card games for three players. It combines efficient partial-information game-tree search with knowledge representation and reasoning. This worst-case analysis, initiated after a small number of tricks, leads to a prioritized choice of cards. We provide variants of KBPS for the declarer and the opponents, and an approximation to find a forced win against most worlds in the belief space. Replaying thousands of expert games, our evaluation indicates that the AIs with the new algorithms perform better than humans in their play, achieving an average score of over 1,000 points in the agreed standard for evaluating Skat tournaments, the extended Seeger system.
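
Heavily simplified, the paranoid criterion can be phrased as a quantifier over belief-space worlds. In the sketch below, wins stands in for the embedded partial-information game-tree search, and lowering threshold from 1.0 yields the "win against most worlds" approximation mentioned in the abstract.

    def forced_win_moves(legal_moves, consistent_worlds, wins, threshold=1.0):
        """Keep the moves that win in at least a `threshold` fraction of
        the worlds consistent with the cards seen so far.
        threshold=1.0 is the strict forced win (paranoid case);
        wins(move, world) -> bool is a stand-in for the game-tree search."""
        result = []
        for move in legal_moves:
            won = sum(wins(move, w) for w in consistent_worlds)
            if won >= threshold * len(consistent_worlds):
                result.append(move)
        return result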


BDDs Strike Back (in AI Planning)

AAAI Conferences

The cost-optimal track of the international planning competition in 2014 has seen an unexpected outcome. In contrast to the preceding competition in 2011, where explicit-state heuristic search planning scored best, advances in state-set exploration with BDDs showed a significant lead. In this paper we review the outcome of the competition, briefly looking into the internals of the competing systems.
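
The search scheme behind such planners is a layered breadth-first search over sets of states. The sketch below uses plain Python sets for the layers; a BDD-based planner stores each layer as a binary decision diagram instead, so that exponentially many states can share one compact representation and the image step becomes a symbolic operation.

    def symbolic_bfs(init_states, image, is_goal):
        """Layered BFS over state sets (explicit sets here, BDDs in a
        real symbolic planner).
        image(S) -> set of all successors of the states in S."""
        reached, frontier, depth = set(init_states), set(init_states), 0
        while frontier:
            if any(is_goal(s) for s in frontier):
                return depth
            frontier = image(frontier) - reached   # one symbolic step
            reached |= frontier
            depth += 1
        return None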