Keep Calm and Relax -- HMI for Autonomous Vehicles

Yekta, Tima M., Schöning, Julius

arXiv.org Artificial Intelligence

The growing popularity of self-driving, so-called autonomous vehicles has increased the need for human-machine interfaces (HMI) and user interaction (UI) that enhance passenger trust and comfort. While fallback drivers significantly influence the perceived trustworthiness of self-driving vehicles, they are an expensive solution that may not even improve vehicle safety in emergency situations. Based on a comprehensive literature review, this work delves into the potential of HMI and UI for enhancing trustworthiness and emotion regulation in driverless vehicles. By analyzing the impact of various HMI and UI designs on passenger emotions, innovative and cost-effective concepts for improving human-vehicle interaction are conceptualized. To enable a trustworthy, highly comfortable, and safe ride, this work concludes by discussing whether HMI and UI are suitable for calming passengers in emergencies, leading to smarter mobility for all.


Adaptive User-centered Neuro-symbolic Learning for Multimodal Interaction with Autonomous Systems

Gomaa, Amr, Feld, Michael

arXiv.org Artificial Intelligence

Recent advances in machine learning, particularly deep learning, have enabled autonomous systems to perceive and comprehend objects and their environments in a subsymbolic manner. These systems can now perform object detection, sensor data fusion, and language understanding tasks. However, there is a growing need for these systems to understand objects and their environments more conceptually and symbolically. To achieve this level of powerful artificial intelligence, it is essential to consider both the explicit teaching provided by humans (e.g., describing a situation or explaining how to act) and the implicit teaching obtained by observing human behavior (e.g., through the system's sensors). Thus, the system must be designed with multimodal input and output capabilities to support both implicit and explicit interaction models. In this position paper, we argue for considering both types of input, as well as human-in-the-loop and incremental learning techniques, to advance the field of artificial intelligence and enable autonomous systems to learn like humans. We propose several hypotheses and design guidelines, and highlight a use case from related work, toward achieving this goal.