A Few AI Challenges Raised while Developing an Architecture for Human-Robot Cooperative Task Achievement

AAAI Conferences

Over the last five years, while developing an architecture for autonomous service robots in human environments, we have identified several key decisional issues that must be tackled for a cognitive robot to share space and tasks with a human. We introduce some of them here: situation assessment and mutual modelling, management and exploitation of each agent's (human and robot) knowledge in separate cognitive models, natural multi-modal communication, "human-aware" task planning, and interleaved human and robot plan achievement. As a general "take home" message, explicit knowledge management, both symbolic and geometric, proves to be a key enabler in addressing these challenges, as it pushes toward a different, more semantic way of framing decision-making in human-robot interaction.
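
The abstract's notion of maintaining separate cognitive models per agent can be illustrated with a minimal sketch. The class names and fact representation below are hypothetical, not taken from the paper's actual architecture; they merely show how a robot might keep its own beliefs and its estimate of the human's beliefs apart, then query their divergence to decide what is worth communicating.

```python
from dataclasses import dataclass, field

@dataclass
class AgentModel:
    """Symbolic belief store for one agent (hypothetical representation)."""
    name: str
    beliefs: set[tuple] = field(default_factory=set)  # e.g. ("isOn", "mug", "table")

class MutualModel:
    """Keeps the robot's own model and its estimate of the human's model separate."""
    def __init__(self):
        self.robot = AgentModel("robot")
        self.human = AgentModel("human")  # the robot's *estimate* of human beliefs

    def observe(self, fact: tuple, visible_to_human: bool):
        # The robot always updates its own model; it attributes the fact to the
        # human's model only if the human could plausibly have perceived it.
        self.robot.beliefs.add(fact)
        if visible_to_human:
            self.human.beliefs.add(fact)

    def divergent_facts(self) -> set[tuple]:
        """Facts the robot holds but (it estimates) the human does not:
        candidates for verbal communication."""
        return self.robot.beliefs - self.human.beliefs

# Usage: the robot sees the mug moved while the human was looking away.
mm = MutualModel()
mm.observe(("isOn", "mug", "shelf"), visible_to_human=False)
print(mm.divergent_facts())  # {("isOn", "mug", "shelf")} -> worth telling the human
```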


Why AI and robots can never replace human teachers

#artificialintelligence

I read with interest recent reports about China wanting to bring artificial intelligence (AI) to its classrooms to boost its education system.


Future Robots As Mothers And Fathers Depend On The Future Of Humans

International Business Times

As far-fetched as it may seem today, there are a couple of compelling reasons why some humans may one day be born without either a mother or father as we now know them, and with no other humans around to bring them up. The first is the uninhabitable Earth scenario: doomsday. This is the idea that one day our planet will not be able to support human life. This may be due to catastrophic climate change brought on by a large asteroid or comet impact, a nuclear winter following a global nuclear war, or a pandemic so severe that humans do not survive. Whatever the cause of our demise, if humans want to ultimately survive and one day re-emerge, it makes sense to store the building blocks of people – ova and sperm – ready for a resurrection of the human race once our planet is habitable again.


Integrating Knowledge Representation, Reasoning, and Learning for Human-Robot Interaction

AAAI Conferences

Robots interacting with humans often have to represent and reason with different descriptions of incomplete domain knowledge and uncertainty, and revise this knowledge over time. Towards achieving these capabilities, the architecture described in this paper combines the complementary strengths of declarative programming, probabilistic graphical models, and reinforcement learning. For any given goal, non-monotonic logical reasoning with a coarse-resolution representation of the domain is used to compute a tentative plan of abstract actions. Each abstract action is implemented as a sequence of concrete actions by reasoning probabilistically over the relevant part of a fine-resolution representation tightly coupled to the coarse-resolution representation. The outcomes of executing the concrete actions are used for subsequent reasoning at the coarse resolution. Furthermore, the task of interactively learning axioms governing action capabilities, preconditions, and effects is posed as a relational reinforcement learning problem, using decision tree regression and sampling to construct and generalize over candidate axioms. These capabilities are illustrated in simulation and on a physical robot moving objects to specific people or locations in an indoor domain.
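
The coarse-to-fine control loop the abstract describes can be sketched in a few lines. The stubs below are placeholders, not the paper's method: in the architecture, the coarse plan comes from non-monotonic logical reasoning and the refinement from probabilistic reasoning over a fine-resolution model, neither of which is reproduced here. The sketch only shows the interleaving of abstract planning, concrete execution, and replanning on failure.

```python
# Hypothetical stand-ins for the paper's logical planner and
# probabilistic refinement; both are stubbed for illustration.
def coarse_plan(goal: str) -> list[str]:
    """Tentative plan of abstract actions for the goal (stub)."""
    return ["goto(kitchen)", "pickup(mug)", "goto(office)", "putdown(mug)"]

def refine_and_execute(abstract_action: str) -> bool:
    """Implement one abstract action as concrete actions under uncertainty
    (stub); returns whether the abstract action's intended effect holds."""
    print(f"executing {abstract_action} as a sequence of concrete actions")
    return True

def achieve(goal: str, max_replans: int = 3) -> bool:
    """Coarse-to-fine loop: plan abstractly, refine each step, and replan
    at the coarse resolution when execution outcomes contradict the plan."""
    for _ in range(max_replans):
        plan = coarse_plan(goal)
        if all(refine_and_execute(a) for a in plan):
            return True
        # Observed outcomes would feed back into coarse-resolution reasoning here.
    return False

achieve("delivered(mug, office)")
```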


Building Appropriate Trust in Human-Robot Teams

AAAI Conferences

Future robotic systems are expected to transition from tools to teammates, characterized by increasingly autonomous, intelligent robots interacting with humans in a more naturalistic manner, approaching a relationship akin to human–human teamwork. Given the impact of trust observed in other systems, trust in the robot team member will likely be critical to effective and safe performance. Our thesis is that trust in a robot team member must be appropriately calibrated rather than simply maximized. Drawing on mental model theory, we describe how the human team member's understanding of the system contributes to trust in human-robot teaming. We discuss how mental models relate to physical and behavioral characteristics of the robot, on the one hand, and to affective and behavioral outcomes, such as trust and system use/disuse/misuse, on the other. We conclude with recommendations for best practices in the research and design of human-robot teams and other systems using artificial intelligence.
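
The calibration thesis can be made concrete with a small numerical sketch. The quantities, thresholds, and function names below are illustrative assumptions rather than anything from the paper: calibration is modeled as the gap between the operator's trust and the robot's actual reliability, with large positive gaps flagging likely misuse (over-trust) and large negative gaps flagging disuse (under-trust).

```python
def calibration_gap(trust: float, reliability: float) -> float:
    """Signed gap between operator trust and robot reliability, both in [0, 1].
    Positive -> over-trust (risk of misuse); negative -> under-trust (risk of disuse)."""
    return trust - reliability

def assess(trust: float, reliability: float, tol: float = 0.15) -> str:
    # tol is an arbitrary illustrative tolerance, not an empirical threshold.
    gap = calibration_gap(trust, reliability)
    if gap > tol:
        return "over-trust: risk of misuse (relying on the robot beyond its competence)"
    if gap < -tol:
        return "under-trust: risk of disuse (ignoring a capable robot)"
    return "calibrated: reliance roughly matches robot competence"

# Illustrative values only.
print(assess(trust=0.9, reliability=0.6))   # over-trust
print(assess(trust=0.3, reliability=0.8))   # under-trust
print(assess(trust=0.7, reliability=0.75))  # calibrated
```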