
Collaborating Authors

 Ribeiro, Tiago


Evaluating Webcam-based Gaze Data as an Alternative for Human Rationale Annotations

arXiv.org Artificial Intelligence

Rationales in the form of manually annotated input spans usually serve as ground truth when evaluating explainability methods in NLP. They are, however, time-consuming to collect and often biased by the annotation process. In this paper, we examine whether human gaze, in the form of webcam-based eye-tracking recordings, is a valid alternative for evaluating importance scores. We assess the additional information provided by gaze data, such as total reading times, gaze entropy, and decoding accuracy, with respect to human rationale annotations. We compare WebQAmGaze, a multilingual dataset for information-seeking QA, with attention- and explainability-based importance scores for 4 multilingual Transformer-based language models (mBERT, distil-mBERT, XLMR, and XLMR-L) and 3 languages (English, Spanish, and German). Our pipeline can easily be applied to other tasks and languages. Our findings suggest that gaze data offers valuable linguistic insights that could be leveraged to infer task difficulty, and that it yields a ranking of explainability methods comparable to that of human rationales.
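
To make the comparison concrete, here is a minimal sketch of how model importance scores can be evaluated against gaze signals such as total reading times and gaze entropy. The data layout and function names are illustrative assumptions, not the paper's actual pipeline; only NumPy and scipy.stats.spearmanr are real APIs.

```python
# Minimal sketch (assumed data layout): rank-correlate per-token model
# importance with per-token total reading time, and compute gaze entropy.
import numpy as np
from scipy.stats import spearmanr

def gaze_agreement(importance_scores, total_reading_times):
    """Spearman correlation between per-token importance and reading times."""
    rho, _ = spearmanr(importance_scores, total_reading_times)
    return rho

def gaze_entropy(total_reading_times):
    """Shannon entropy (bits) of the fixation-duration distribution over tokens."""
    p = np.asarray(total_reading_times, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Toy example: five tokens, one model's scores vs. one reader's gaze (ms).
scores = np.array([0.10, 0.45, 0.05, 0.30, 0.10])
gaze = np.array([120.0, 430.0, 80.0, 350.0, 95.0])
print(f"Spearman rho: {gaze_agreement(scores, gaze):.2f}")
print(f"Gaze entropy: {gaze_entropy(gaze):.2f} bits")
```

Ranking explainability methods then amounts to sorting them by such agreement scores, averaged over texts and readers.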


WebQAmGaze: A Multilingual Webcam Eye-Tracking-While-Reading Dataset

arXiv.org Artificial Intelligence

We create WebQAmGaze, a multilingual, low-cost eye-tracking-while-reading dataset designed to support the development of fair and transparent NLP models. WebQAmGaze includes webcam eye-tracking data from 332 participants naturally reading English, Spanish, and German texts. Each participant performs two reading tasks, normal reading and information-seeking, each composed of five texts. After preprocessing the data, we find that fixations on relevant spans seem to indicate correctness when answering the comprehension questions. Additionally, we compare the collected data to high-quality eye-tracking data recorded with a commercial device. The results show a moderate correlation between the features obtained with the webcam ET and those of a commercial ET device. We believe this data can advance webcam-based reading studies and open the way to cheaper and more accessible data collection. WebQAmGaze is useful for learning about the cognitive processes behind question answering (QA) and for applying these insights to computational models of language understanding.
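
As a toy illustration of the finding that fixations on relevant spans track answer correctness, the sketch below computes the share of fixation time spent inside the relevant span and averages it per outcome. The data layout is an assumption made for illustration, not the actual WebQAmGaze schema.

```python
# Hedged sketch: relate fixation time on the relevant span to correctness.
import numpy as np

def span_fixation_ratio(fix_durations, span_start, span_end):
    """Share of total fixation time on tokens in [span_start, span_end)."""
    fix = np.asarray(fix_durations, dtype=float)
    total = fix.sum()
    return fix[span_start:span_end].sum() / total if total > 0 else 0.0

# Toy trials: (per-token fixation durations in ms, relevant span, correct?).
trials = [
    ([50, 40, 300, 280, 30], (2, 4), True),
    ([90, 80, 60, 50, 70], (2, 4), False),
    ([20, 30, 250, 310, 40], (2, 4), True),
]
by_outcome = {True: [], False: []}
for durations, (s, e), correct in trials:
    by_outcome[correct].append(span_fixation_ratio(durations, s, e))
for correct, vals in by_outcome.items():
    print(f"correct={correct}: mean span-fixation ratio {np.mean(vals):.2f}")
```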


A Social Robot as a Card Game Player

AAAI Conferences

This paper describes a social robotic game player that successfully plays a team card game called Sueca. The question we address is: how can we build a social robot player that balances its ability to play the card game with natural and social behaviours towards its partner and its opponents? The first challenge concerned the development of a competent artificial player for a hidden-information game whose time constraint is the average human decision time. To meet this requirement, the Perfect Information Monte Carlo (PIMC) algorithm was used. Further, we analysed this algorithm's possible parametrizations for game trees that cannot be fully explored in a reasonable amount of time with a MinMax search. Additionally, given the nature of the Sueca game, such a robotic player must master the social interactions both as a partner and as an opponent. To that end, an emotional agent framework (FAtiMA) was used to build the emotional and social behaviours of the robot. At each moment, the robot not only plays competitively but also appraises the situation and responds emotionally in a natural manner. To test the approach, we conducted a user study and compared the levels of trust participants attributed to robot and human partners. Results showed that the robot team achieved a winning rate of 60%. Concerning the social aspects, the results also showed that human players increased their trust in the robot as their game partner, similarly to the way trust levels change towards human partners.
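
For readers unfamiliar with PIMC, the sketch below shows its core loop: repeatedly sample a determinization of the hidden cards, evaluate each legal move under perfect information, and pick the move with the best average value, all within a fixed time budget mirroring the human decision-time constraint. The game-specific pieces (deal sampling, move evaluation) are illustrative stubs, not the paper's Sueca implementation.

```python
# Hedged sketch of Perfect Information Monte Carlo (PIMC).
import random
import time
from collections import defaultdict

def pimc_choose(state, legal_moves, sample_deal, evaluate, budget_s=2.0):
    """Average each move's value over sampled determinizations.

    sample_deal(state) -> a full deal consistent with everything observed.
    evaluate(deal, state, move) -> value of the move under perfect
        information (e.g., MinMax search or random playouts).
    """
    totals, counts = defaultdict(float), defaultdict(int)
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:
        deal = sample_deal(state)            # one determinization
        for move in legal_moves:             # solve as if fully observable
            totals[move] += evaluate(deal, state, move)
            counts[move] += 1
    return max(legal_moves, key=lambda m: totals[m] / counts[m])

# Toy usage with hypothetical stubs; a real player would enumerate the
# opponents' possible hands and search the determinized game tree.
random.seed(0)
moves = ["ace", "seven", "king"]
best = pimc_choose(
    state=None, legal_moves=moves,
    sample_deal=lambda s: random.random(),
    evaluate=lambda d, s, m: random.random() + (0.3 if m == "ace" else 0.0),
    budget_s=0.1,
)
print("PIMC picks:", best)
```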


The SERA Ecosystem: Socially Expressive Robotics Architecture for Autonomous Human-Robot Interaction

AAAI Conferences

Based on the development of several different HRI scenarios using different robots, we have been establishing the SERA ecosystem. SERA is composed of both a model and tools for integrating an AI agent with a robotic embodiment in human-robot interaction (HRI) scenarios. We present the model and several of the reusable tools that were developed, namely Thalamus, Skene, and Nutty Tracks. Finally, we exemplify how these tools and the model have been used and integrated in five different HRI scenarios using the NAO, Keepon, and EMYS robots.
[Figure 1: Our methodology as an intersection of CGI animation, IVA, and robotics techniques.]
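
The integration style SERA relies on can be pictured as independent modules exchanging messages. The sketch below is a generic publish-subscribe bus in that spirit; it is not the Thalamus API, and the module roles are only loosely modelled on Skene (behaviour planning) and Nutty Tracks (animation).

```python
# Illustrative message-passing integration sketch (not the actual SERA tools).
from collections import defaultdict

class Bus:
    """Tiny publish-subscribe bus standing in for a Thalamus-like middleware."""
    def __init__(self):
        self.subscribers = defaultdict(list)
    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)
    def publish(self, topic, message):
        for handler in self.subscribers[topic]:
            handler(message)

bus = Bus()
# Behaviour-planner role (Skene-like): turn agent intentions into behaviours.
bus.subscribe("intention", lambda msg: bus.publish("behaviour", "gaze+say:" + msg))
# Animation-engine role (Nutty-Tracks-like): render behaviours on the robot.
bus.subscribe("behaviour", lambda msg: print("[robot] executing", msg))
# The AI agent publishes a high-level intention; the modules react in turn.
bus.publish("intention", "greet_user")
```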


Make Way for the Robot Animators! Bringing Professional Animators and AI Programmers Together in the Quest for the Illusion of Life in Robotic Characters

AAAI Conferences

We are looking at new ways of building algorithms for synthesizing and rendering animation in social robots that keep them as interactive as necessary while still following the principles and practices used by professional animators. We will study the animation process side by side with professional animators in order to understand how these algorithms and tools can be used by animators to achieve animation capable of correctly adapting to the environment and to the artificial intelligence that controls the robot.
[Figure 1: Two example scenarios featuring a touch-based multimedia application, sensors, and different robots.]
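
As one concrete example of the animation principles the abstract refers to, the sketch below applies "slow in, slow out" easing to a robot joint trajectory instead of a constant-velocity ramp. It is purely illustrative and does not reproduce the paper's tools or algorithms.

```python
# Hedged sketch: "slow in / slow out" easing for a single robot joint.
import math

def ease_in_out(t):
    """Cosine easing: maps 0..1 to 0..1 with zero velocity at both ends."""
    return 0.5 - 0.5 * math.cos(math.pi * t)

def joint_trajectory(start_deg, end_deg, steps):
    """Sample an eased joint trajectory between two angles."""
    return [start_deg + (end_deg - start_deg) * ease_in_out(i / (steps - 1))
            for i in range(steps)]

# The motion starts and ends gently rather than snapping between angles.
print([round(a, 1) for a in joint_trajectory(0.0, 90.0, 6)])
```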


Meet Me Halfway: Eye Behaviour as an Expression of Robot's Language

AAAI Conferences

Eye contact is a crucial behaviour in human communication and therefore an essential feature in human-robot interaction. We present a study on the development of an eye behaviour model for a robotic tutor in a task-oriented environment, along with a description of how our proposed model is being used to implement an autonomous robot in the EMOTE project.
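
A rule-based controller along the lines described might alternate between mutual gaze and task-directed gaze depending on the conversational state. The states and rules below are illustrative assumptions, not the model from the paper.

```python
# Hedged sketch of a task-oriented eye-behaviour policy for a robotic tutor.
def gaze_target(robot_speaking, user_speaking, task_focus=None):
    if user_speaking:
        return "user_face"      # mutual gaze: show attention on the user's turn
    if robot_speaking and task_focus:
        return task_focus       # deictic gaze: look at what is being discussed
    if robot_speaking:
        return "user_face"      # hold eye contact while addressing the user
    return "idle_scan"          # otherwise, natural idle gaze behaviour

print(gaze_target(robot_speaking=True, user_speaking=False, task_focus="exercise_3"))
```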