Collaborating Authors

 Buche, Cédric


RoboCup@Home Education 2020 Best Performance: RoboBreizh, a modular approach

arXiv.org Artificial Intelligence

Every year, the RoboCup@Home competition challenges teams' and robots' abilities. In 2020, the RoboCup@Home Education challenge was organized online, altering the usual competition rules. In this paper, we present the latest developments that led the RoboBreizh team to win the contest. These developments include several interconnected modules that allow the Pepper robot to understand, act and adapt to its local environment. Up-to-date available technologies are used for navigation and dialogue. The first contribution combines object detection and pose estimation techniques to detect the user's intention. The second contribution uses Learning by Demonstration to easily teach the Pepper robot new movements that improve its skills. This proposal won the best performance award of the 2020 RoboCup@Home Education challenge.
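The minimal sketch below illustrates one way object detection and pose estimation could be combined to infer a pointing intention; it is not the team's code, and the 2D keypoints, bounding boxes and angular threshold are hypothetical. An object is considered "pointed at" when it lies roughly along the elbow-to-wrist direction given by a pose estimator.

```python
# Hypothetical sketch (not the authors' code): inferring a "user points at object"
# intention by combining object-detection boxes with pose-estimation keypoints.
from dataclasses import dataclass
from typing import List, Optional, Tuple
import math

@dataclass
class Detection:
    label: str
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    def center(self) -> Tuple[float, float]:
        return ((self.x_min + self.x_max) / 2, (self.y_min + self.y_max) / 2)

def pointed_object(elbow, wrist, detections: List[Detection],
                   max_angle_deg: float = 15.0) -> Optional[Detection]:
    """Return the detection best aligned with the elbow->wrist direction."""
    arm_angle = math.atan2(wrist[1] - elbow[1], wrist[0] - elbow[0])
    best, best_diff = None, math.radians(max_angle_deg)
    for det in detections:
        cx, cy = det.center()
        obj_angle = math.atan2(cy - wrist[1], cx - wrist[0])
        # Smallest absolute difference between the two angles, wrapped to [0, pi].
        diff = abs(math.atan2(math.sin(obj_angle - arm_angle),
                              math.cos(obj_angle - arm_angle)))
        if diff < best_diff:
            best, best_diff = det, diff
    return best

# Example with made-up 2D image coordinates (e.g. from a detector and a pose model):
objects = [Detection("cup", 300, 200, 340, 260), Detection("book", 100, 220, 180, 280)]
target = pointed_object(elbow=(200, 300), wrist=(240, 270), detections=objects)
print(target.label if target else "no clear target")
```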


Hierarchical Affordance Discovery using Intrinsic Motivation

arXiv.org Artificial Intelligence

To be capable of lifelong learning in a real-life environment, robots have to tackle multiple challenges. Being able to relate the physical properties they observe in their environment to the interactions they may have with it is one of them. This skill, named affordance learning, is strongly related to embodiment and is mastered throughout each person's development: each individual learns affordances differently through their own interactions with their surroundings. Current methods for affordance learning usually either rely on fixed actions to learn these affordances or focus on static setups involving a robotic arm. In this article, we propose an algorithm that uses intrinsic motivation to guide the learning of affordances for a mobile robot. This algorithm is capable of autonomously discovering, learning and adapting interrelated affordances without pre-programmed actions. Once learned, these affordances may be used by the algorithm to plan sequences of actions in order to perform tasks of various difficulties. We then present one experiment and analyse our system before comparing it with other approaches from reinforcement learning and affordance learning.
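The sketch below illustrates the general idea of intrinsic motivation driving exploration, not the paper's algorithm: a toy agent chooses which affordance model to train next based on its recent learning progress (the decrease in prediction error). The affordance names, the predictor and the parameters are illustrative assumptions.

```python
# Toy sketch (assumed, not the paper's algorithm): choosing which affordance
# model to train next using learning progress as an intrinsic-motivation signal.
import random
from collections import defaultdict, deque

class AffordanceModel:
    """Toy predictor of an effect value; learns a running mean."""
    def __init__(self):
        self.estimate, self.n = 0.0, 0
    def predict(self):
        return self.estimate
    def update(self, observed):
        self.n += 1
        self.estimate += (observed - self.estimate) / self.n

def learning_progress(errs, window=5):
    """Drop in prediction error between the older and newer half of a window."""
    if len(errs) < 2 * window:
        return float("inf")  # not enough data yet: stay curious
    old = sum(list(errs)[-2 * window:-window]) / window
    new = sum(list(errs)[-window:]) / window
    return old - new

# Environment stub: each "affordance" has a hidden mean effect plus noise.
hidden_effects = {"push": 1.0, "lift": 3.0, "rotate": -2.0}
models = {name: AffordanceModel() for name in hidden_effects}
errors = defaultdict(lambda: deque(maxlen=20))

for step in range(200):
    # Intrinsic motivation: pick the affordance with the highest learning progress.
    chosen = max(models, key=lambda a: learning_progress(errors[a]))
    observed = hidden_effects[chosen] + random.gauss(0, 0.1)
    errors[chosen].append(abs(models[chosen].predict() - observed))
    models[chosen].update(observed)

print({a: round(m.estimate, 2) for a, m in models.items()})
```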


Data Clustering and Similarity

AAAI Conferences

In this article, we study the notion of similarity within the context of cluster analysis. We begin by studying different distances commonly used for this task and highlight important properties they may have, such as taking the data distribution into account or having reduced sensitivity to the curse of dimensionality. We then study inter- and intra-cluster similarities and identify how these choices can influence the nature of the clusters.
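As a minimal illustration of these notions (not code from the article), the sketch below computes a simple intra-cluster compactness and inter-cluster separation on toy 2D data using the Euclidean distance; other distances discussed in the article could be substituted for it.

```python
# Illustrative sketch (not from the article): intra-cluster vs. inter-cluster
# dissimilarity with plain Euclidean distance on toy 2D data.
import math
from itertools import combinations

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def mean_intra_distance(cluster):
    """Average pairwise distance inside one cluster (compactness)."""
    pairs = list(combinations(cluster, 2))
    return sum(euclidean(a, b) for a, b in pairs) / len(pairs)

def centroid(cluster):
    dim = len(cluster[0])
    return tuple(sum(p[d] for p in cluster) / len(cluster) for d in range(dim))

def inter_distance(c1, c2):
    """Distance between cluster centroids (separation)."""
    return euclidean(centroid(c1), centroid(c2))

cluster_a = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.3)]
cluster_b = [(5.0, 5.2), (5.1, 4.9), (4.8, 5.0)]
print("intra A:", round(mean_intra_distance(cluster_a), 3))
print("intra B:", round(mean_intra_distance(cluster_b), 3))
print("inter A-B:", round(inter_distance(cluster_a, cluster_b), 3))
```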


Learning a Representation of a Believable Virtual Character's Environment with an Imitation Algorithm

arXiv.org Artificial Intelligence

In video games, virtual characters' decision systems often use a simplified representation of the world. To increase both their autonomy and believability, we want those characters to be able to learn this representation from human players. We propose to use a model called growing neural gas to learn the topology of the environment by imitation. The implementation of the model, the modifications we made and the parameters we used are detailed. Then, the quality of the learned representations and their evolution during learning are studied using different measures. Improvements to the growing neural gas that would provide more information to the character's model are given in the conclusion.
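For illustration, the following is a simplified growing neural gas sketch; it omits node pruning and uses made-up parameters rather than those reported in the paper. It shows how nodes and edges can grow to reflect the topology of positions presented one at a time.

```python
# Simplified growing neural gas sketch (illustrative parameters, node pruning
# omitted; not the implementation from the paper).
import random

EPS_B, EPS_N = 0.05, 0.005    # learning rates for the winner / its neighbours
AGE_MAX, INSERT_EVERY = 50, 100
ALPHA, DECAY = 0.5, 0.995     # error reduction at insertion / global error decay

weights = [[random.random(), random.random()] for _ in range(2)]  # node positions
errors = [0.0, 0.0]           # accumulated error per node
edges = {}                    # frozenset({i, j}) -> age

def dist2(i, x):
    return sum((w - v) ** 2 for w, v in zip(weights[i], x))

def step(x, t):
    # 1. Find the two nodes closest to the input sample.
    s1, s2 = sorted(range(len(weights)), key=lambda i: dist2(i, x))[:2]
    errors[s1] += dist2(s1, x)
    # 2. Move the winner and its neighbours toward the input; age the winner's edges.
    weights[s1] = [w + EPS_B * (v - w) for w, v in zip(weights[s1], x)]
    for e in list(edges):
        if s1 in e:
            edges[e] += 1
            (n,) = e - {s1}
            weights[n] = [w + EPS_N * (v - w) for w, v in zip(weights[n], x)]
    edges[frozenset({s1, s2})] = 0            # connect/refresh the two winners
    for e in [e for e, age in edges.items() if age > AGE_MAX]:
        del edges[e]                          # drop obsolete edges
    # 3. Periodically grow: split the region with the highest accumulated error.
    if t % INSERT_EVERY == 0:
        q = max(range(len(weights)), key=lambda i: errors[i])
        nbrs = [next(iter(e - {q})) for e in edges if q in e]
        if nbrs:
            f = max(nbrs, key=lambda i: errors[i])
            weights.append([(a + b) / 2 for a, b in zip(weights[q], weights[f])])
            errors[q] *= ALPHA
            errors[f] *= ALPHA
            errors.append(errors[q])
            r = len(weights) - 1
            del edges[frozenset({q, f})]
            edges[frozenset({q, r})] = 0
            edges[frozenset({f, r})] = 0
    for i in range(len(errors)):
        errors[i] *= DECAY

# Feed positions sampled from two "rooms" of a toy environment.
for t in range(1, 3001):
    room = random.choice([(0.0, 0.0), (5.0, 5.0)])
    step((room[0] + random.random(), room[1] + random.random()), t)
print(len(weights), "nodes,", len(edges), "edges")
```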


Automatable Evaluation Method Oriented toward Behaviour Believability for Video Games

arXiv.org Artificial Intelligence

Classic evaluation methods for believable agents are time-consuming because they require many humans to judge the agents. They are well suited to validating work on new believable behaviour models. However, during implementation, numerous experiments can help improve agents' believability. We propose a method that aims at assessing how much an agent's behaviour resembles humans' behaviours. By representing behaviours as vectors, we can store data computed for humans and then evaluate as many agents as needed without further human involvement. We present a test experiment which shows that even a simple evaluation following our method can reveal differences between fairly believable agents and humans. This method seems promising although, as shown in our experiment, analysing the results can be difficult.
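The sketch below illustrates the general principle with an assumed metric, not the paper's exact one: behaviour vectors computed once for human players are stored, and any agent is then scored by its distance to the closest human vector. The feature names and values are hypothetical.

```python
# Illustrative sketch (assumed metric, not the paper's): score an agent by how
# close its behaviour vector is to the nearest stored human behaviour vector.
import math

def distance(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def believability_score(agent_vec, human_vecs):
    """Smaller distance to the closest human behaviour => score closer to 1."""
    d = min(distance(agent_vec, h) for h in human_vecs)
    return 1.0 / (1.0 + d)

# Hypothetical behaviour features: [mean speed, turns per minute, shots per minute]
human_vectors = [[3.1, 12.0, 4.5], [2.8, 10.5, 5.0], [3.4, 13.2, 3.9]]
scripted_bot = [6.0, 40.0, 12.0]   # unrealistically fast and trigger-happy
learned_bot = [3.0, 11.4, 4.7]     # close to recorded human behaviour

print("scripted bot:", round(believability_score(scripted_bot, human_vectors), 3))
print("learned bot:", round(believability_score(learned_bot, human_vectors), 3))
```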


The Challenge of Believability in Video Games: Definitions, Agents Models and Imitation Learning

arXiv.org Artificial Intelligence

In this paper, we address the problem of creating believable agents (virtual characters) in video games. We consider only one meaning of believability, ``giving the feeling of being controlled by a player'', and outline the problem of its evaluation. We present several models for agents in games which can produce believable behaviours, both from industry and research. For a high level of believability, learning, and especially imitation learning, seems to be the way to go. We give a brief overview of different approaches to making video games' agents learn from players. To conclude, we propose a two-step method to develop new models for believable agents. First, we must find the criteria for believability for our application and define an evaluation method. Then the model and the learning algorithm can be designed.