
Collaborating Authors

 Shukla, Nishant


Task Learning through Visual Demonstration and Situated Dialogue

AAAI Conferences

To enable effective collaboration between humans and cognitive robots, it is important for robots to continuously acquire task knowledge from their human partners. To address this need, we are currently developing a framework that supports task learning through visual demonstration and natural language dialogue. One core component of this framework is the integration of language and vision, driven by dialogue, for learning task knowledge. This paper describes our ongoing effort, in particular grounded task learning through joint processing of video and dialogue using And-Or Graphs (AOGs).
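To give a rough sense of the And-Or Graph representation the abstract refers to, the Python sketch below models a task as AND nodes (ordered sub-steps) and OR nodes (alternative methods), with terminal nodes standing in for grounded actions. This is a minimal illustration of the general idea only; the names (Node, expand, the cloth-folding example) are hypothetical and not taken from the authors' system, which learns such graphs jointly from video and dialogue.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """One node of a toy And-Or Graph (AOG)."""
    label: str
    kind: str = "terminal"           # "and" (ordered sub-steps), "or" (alternatives), or "terminal"
    children: List["Node"] = field(default_factory=list)

def expand(node: Node) -> List[str]:
    """Expand an AOG into one concrete action sequence.

    AND nodes contribute all children in order; OR nodes pick the first
    alternative here (a learned model would instead weight alternatives
    by how often each was observed in demonstrations).
    """
    if node.kind == "terminal":
        return [node.label]
    if node.kind == "and":
        return [a for child in node.children for a in expand(child)]
    return expand(node.children[0])  # "or": choose one branch

# Hypothetical task grounded from a demonstration video.
fold_cloth = Node("fold_cloth", "and", [
    Node("grasp_corner"),
    Node("fold_method", "or", [
        Node("fold_in_half"),
        Node("fold_in_thirds"),
    ]),
    Node("flatten"),
])

print(expand(fold_cloth))  # ['grasp_corner', 'fold_in_half', 'flatten']
```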


A Unified Framework for Human-Robot Knowledge Transfer

AAAI Conferences

Transferring knowledge between humans is a vital skill for efficiently learning a new concept. In a perfect system, a human demonstrator can teach a robot a new task using natural language and physical gestures. The robot would gradually accumulate and refine its spatial, temporal, and causal understanding of the world. The knowledge could then be transferred back to another human, or on to another robot. The implications of effective human-to-robot knowledge transfer include the compelling opportunity of a robot acting as the teacher, guiding humans through new tasks. The technical difficulty in achieving a robot implementation of this caliber involves both an expressive knowledge …

Figure 1: The robot autonomously performs a cloth-folding task after learning from a human demonstration.
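As a loose illustration of how a robot might "gradually accumulate and refine" causal understanding from demonstrations, the sketch below simply counts observed state transitions and predicts the most frequently seen outcome of an action. Everything here (TaskKnowledge, observe, predict, the state names) is an invented stand-in; the framework described in the paper uses a far richer joint spatial, temporal, and causal representation.

```python
from collections import Counter, defaultdict

class TaskKnowledge:
    """Toy store of causal task knowledge learned from demonstrations."""

    def __init__(self) -> None:
        # (pre_state, action) -> Counter of observed post_states
        self.effects = defaultdict(Counter)

    def observe(self, pre_state: str, action: str, post_state: str) -> None:
        """Accumulate one demonstrated state transition."""
        self.effects[(pre_state, action)][post_state] += 1

    def predict(self, pre_state: str, action: str) -> str:
        """Refined belief: the most frequently observed outcome."""
        return self.effects[(pre_state, action)].most_common(1)[0][0]

kb = TaskKnowledge()
kb.observe("cloth_flat", "fold_in_half", "cloth_half_folded")
kb.observe("cloth_half_folded", "fold_in_half", "cloth_folded")
print(kb.predict("cloth_flat", "fold_in_half"))  # cloth_half_folded
```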