Over the last five years, while developing an architecture for autonomous service robots in human environments, we have identified several key decisional issues that must be tackled for a cognitive robot to share space and tasks with a human. We introduce some of them here: situation assessment and mutual modelling, management and exploitation of each agent's (human and robot) knowledge in separate cognitive models, natural multi-modal communication, "human-aware" task planning, and interleaved human-robot plan achievement. As a general "take home" message, explicit knowledge management, both symbolic and geometric, proves to be a key to success in addressing these challenges, as it pushes toward a different, more semantic way of approaching decision-making in human-robot interaction.
As far-fetched as it may seem today, there are a couple of compelling reasons why some humans may one day be born without either a mother or father as we now know them, and with no other humans around to bring them up. The first is the uninhabitable Earth scenario: doomsday. This is the idea that one day our planet will not be able to support human life. This may be due to catastrophic climate change brought on by a large asteroid or comet impact, a nuclear winter following a global nuclear war, or a pandemic so severe that humans do not survive. Whatever the cause of our demise, if humans want to ultimately survive and one day re-emerge, it makes sense to store the building blocks of people – ova and sperm – ready for a resurrection of the human race once our planet is habitable again.
Initiation of engagement between humans is a rich and complex process. Providing a humanoid robot with the ability to participate adequately in initiating engagement with a human poses exciting challenges, both in designing the robot's behaviors and in designing evaluation experiments to test initiation.
Ososky, Scott (University of Central Florida) | Schuster, David (University of Central Florida) | Phillips, Elizabeth (University of Central Florida) | Jentsch, Florian G (University of Central Florida)
Future robotic systems are expected to transition from tools to teammates, characterized by increasingly autonomous, intelligent robots interacting with humans in a more naturalistic manner, approaching a relationship more akin to human-human teamwork. Given the impact of trust observed in other systems, trust in the robot team member will likely be critical to effective and safe performance. Our thesis in this paper is that trust in a robot team member must be appropriately calibrated rather than simply maximized. Evoking mental model theory, we describe how the human team member's understanding of the system contributes to trust in human-robot teaming. We discuss how mental models are related to physical and behavioral characteristics of the robot, on the one hand, and to affective and behavioral outcomes, such as trust and system use/disuse/misuse, on the other. We conclude with recommendations for best practices in the research and design of human-robot teams and other systems using artificial intelligence.