Building Consciousness

#artificialintelligence

A blast of old-fashioned optimism from Owen Holland: let's just build a conscious robot! It's a short video, so Holland doesn't get much chance to back up his prediction that if you're under thirty you will meet a conscious robot. He voices feelings that I suspect are common on the engineering and robotics side of the house, if not usually expressed so clearly: why don't we just get on and put a machine together to do this? Philosophy, psychology, all that airy-fairy stuff is getting us nowhere; we'll learn more from a bad robot than from twenty papers on qualia. His basic idea is that consciousness essentially comes down to an internal model of the world.


How Robots Can Recognize Activities and Plans Using Topic Models

AAAI Conferences

The ability to identify what humans are doing in the environment is a crucial element of successful responsive behavior in human-robot interaction. We examine new ways to perform plan recognition (PR) using natural language processing (NLP) techniques. PR often focuses on the structural relationships between consecutive observations and the ordered activities that comprise plans. NLP, by contrast, commonly treats text as a bag of words, omitting such structural relationships and instead using topic models to characterize the distribution of concepts discussed in documents. In this paper, we examine an analogous treatment of plans as distributions of activities. We explore the application of Latent Dirichlet Allocation topic models to human skeletal data of plan execution traces obtained from an RGB-D sensor. This investigation focuses on representing the data as text and interpreting the learned topics as activities, a form of activity recognition (AR). Additionally, we explain how the system may perform PR. The initial empirical results suggest that such NLP methods can be useful in complex PR and AR tasks.
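To make the plans-as-documents analogy concrete, here is a minimal Python sketch using scikit-learn's LDA implementation. The traces and activity tokens below are invented placeholders, not the paper's RGB-D dataset, and the paper's own pipeline may well differ; the point is only that each execution trace is treated as a "document" whose "words" are discretized activity tokens.

```python
# Bag-of-words treatment of plans: traces are documents, activities are words.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical traces: each string is one plan execution; tokens stand in
# for symbolic activity labels produced upstream from skeletal data.
traces = [
    "reach grasp lift carry place release",
    "reach grasp lift carry place release reach grasp",
    "walk stop wave walk stop wave",
    "walk wave stop walk wave",
]

# Bag-of-words: discard the ordering of activities, keep only their counts.
vectorizer = CountVectorizer(token_pattern=r"\S+")
counts = vectorizer.fit_transform(traces)

# Fit LDA; each latent topic is read as an activity/plan type.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(counts)   # per-trace topic mixtures
print(theta.round(2))

# Plan recognition for a new observation: infer its topic distribution.
new_counts = vectorizer.transform(["reach grasp lift place"])
print(lda.transform(new_counts))    # mixture over the learned "plans"
```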


Learning Probabilistic Models for Mobile Manipulation Robots

AAAI Conferences

Mobile manipulation robots are envisioned to provide many useful services both in domestic environments and in industrial contexts. In this paper, we present novel approaches that allow mobile manipulation systems to autonomously adapt to new or changing situations. The approaches developed in this paper cover the following four topics: (1) learning the robot's kinematic structure and properties using actuation and visual feedback, (2) learning about articulated objects in the environment in which the robot is operating, (3) using tactile feedback to augment visual perception, and (4) learning novel manipulation tasks from human demonstrations.
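As a flavor of what topic (2) involves, the following self-contained Python sketch (not the paper's code; the simulated data and model-selection rule are invented for illustration) fits both a prismatic (line) and a revolute (circle) model to a tracked handle trajectory and picks the joint type with the lower residual.

```python
# Toy articulation-model selection: does the object move like a drawer
# (prismatic, points on a line) or a door (revolute, points on a circle)?
import numpy as np

def line_residual(pts):
    """RMS distance of 2D points to their best-fit line (prismatic model)."""
    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]                       # direction orthogonal to the line
    return np.sqrt(np.mean((centered @ normal) ** 2))

def circle_residual(pts):
    """RMS radial error of a least-squares circle fit (revolute model)."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    sol, *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    cx, cy = sol[0], sol[1]
    r = np.sqrt(sol[2] + cx**2 + cy**2)
    return np.sqrt(np.mean((np.hypot(x - cx, y - cy) - r) ** 2))

# Simulated door-handle track: an arc of radius 0.8 m plus sensor noise.
rng = np.random.default_rng(0)
angles = np.linspace(0.0, 1.2, 30)
pts = 0.8 * np.column_stack([np.cos(angles), np.sin(angles)])
pts += rng.normal(scale=0.005, size=pts.shape)

models = {"prismatic": line_residual(pts), "revolute": circle_residual(pts)}
print(models, "->", min(models, key=models.get))   # expect "revolute"
```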


Combining World and Interaction Models for Human-Robot Collaborations

AAAI Conferences

As robotic technologies mature, we can imagine an increasing number of applications in which robots could soon prove useful in unstructured human environments. Many of those applications require a natural interface between the robot and untrained human users, or are possible only in a human-robot collaborative scenario. In this paper, we study an example of such a scenario, in which a visually impaired person and a robotic guide collaborate in an unfamiliar environment. We then analyze how the scenario can be realized through language- and gesture-based human-robot interaction combined with semantic spatial understanding and reasoning, and we propose an integration of a semantic world model with language and gesture models for several collaboration modes. We believe that in this way practical robotic applications can be achieved in human environments using currently available technology.
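To suggest what combining a semantic world model with language and gesture models could look like in code, here is a hypothetical Python sketch. The object names, scores, and weights are all invented, and the paper's system is certainly richer; the sketch only shows the basic idea of resolving a spoken category plus a pointing direction against a semantic map.

```python
# Resolve "that chair" + a pointing gesture against a semantic world model.
import numpy as np

# Semantic world model: object label, category, and 2D position in the map.
world = [
    ("chair_1", "chair", np.array([2.0, 1.0])),
    ("chair_2", "chair", np.array([-1.5, 2.0])),
    ("table_1", "table", np.array([2.2, 0.8])),
]

def resolve(category, origin, pointing_dir, world, w_lang=0.5, w_gest=0.5):
    """Return the object best matching the spoken category and gesture ray."""
    pointing_dir = pointing_dir / np.linalg.norm(pointing_dir)
    best, best_score = None, -np.inf
    for name, cat, pos in world:
        lang_score = 1.0 if cat == category else 0.0
        to_obj = pos - origin
        # Cosine between the pointing ray and the direction to the object.
        gest_score = float(to_obj @ pointing_dir / np.linalg.norm(to_obj))
        score = w_lang * lang_score + w_gest * gest_score
        if score > best_score:
            best, best_score = name, score
    return best

# The user stands at the origin, says "chair", and points toward (1, 0.5):
# both chairs match the word, so the gesture disambiguates in favor of chair_1.
print(resolve("chair", np.array([0.0, 0.0]), np.array([1.0, 0.5]), world))
```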


Our New Model Robot Armies

#artificialintelligence
