We present an approach for learning grounded language from mixed-initiative human-robot interaction. Prior work on learning from human instruction has concentrated on acquisition of task-execution knowledge from domain-specific language. In this work, we demonstrate acquisition of linguistic, semantic, perceptual, and procedural knowledge from mixed-initiative, natural language dialog. Our approach has been instantiated in a cognitive architecture, Soar, and has been deployed on a table-top robotic arm capable of picking up small objects. A preliminary analysis verifies the ability of the robot to acquire diverse knowledge from human-robot interaction.
In this paper, we describe a complex Soar agent that uses and learns multiple types of knowledge while interacting with a human in a real-world domain. Our hypothesis is that a diverse set of memories is required for the different types of knowledge. We first present the agent’s processing, highlighting the types of knowledge used for each phase. We then present Soar’s memories and identify which memory is used for each type of knowledge. We also analyze which properties of each memory make it appropriate for the knowledge it encodes. We conclude with a summary of our analysis.
This paper discusses the challenge of designing instructable agents that can learn through interaction with a human expert. Learning through instruction is a powerful paradigm for acquiring knowledge because it limits the complexity of the learning task in a variety of ways. To support learning through instruction, the agent must be able to effectively communicate its lack of knowledge to the human, comprehend instructions, and apply them to the ongoing task. We identify some problems of concern when designing instructable agents. We propose an agent design that addresses some of these problems. We instantiate this design in the Soar cognitive architecture and analyze its capabilities on a learning task.
Linguistic communication relies on non-linguistic context to convey meaning. That context might include, for instance, recent or long-term experience, semantic knowledge of the world, or objects and events in the immediate environment. In this paper, we describe embodied agents instantiated in the Soar cognitive architecture that use context derived from their linguistic, perceptual, procedural, and semantic knowledge to comprehend imperative sentences.
This paper proposes that the "right" abstraction for representing general intelligence depends on the timescale of behavior under study (Newell 1990) and on the overall goals of the research: is it to faithfully model the brain, to model the mind, or to achieve the same functionality? I briefly describe my approach, which focuses on functionality and timescales above 0.1 seconds. My strategy is to draw inspiration from neuroscience and cognitive psychology to achieve general intelligence through the study and development of the Soar symbolic cognitive architecture.