Exploring Affordances Using Human-Guidance and Self-Exploration

AAAI Conferences

Our work is aimed at service robots deployed in human environments that will need many specialized object manipulation skills. We believe robots should leverage end-users to quickly and efficiently learn the affordances of objects in their environment. Prior work has shown that this approach is promising because people naturally focus on showing salient rare aspects of the objects (Thomaz and Cakmak 2009). We replicate these prior results and build on them to create a semi-supervised combination of self and guided learning. We compare three conditions: (1) learning through self-exploration, (2) learning from demonstrations provided by 10 naive users, and (3) self-exploration seeded with the user demonstrations. Initial results suggest benefits of a mixed-initiative approach.
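To make the mixed-initiative idea in condition (3) concrete, here is a minimal sketch of seeding an affordance classifier with user demonstrations and then refining it with the robot's own exploration. The feature layout, labels, and helper functions (demo_samples, self_exploration_batch) are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: seed an affordance model with demonstrations, then
# keep updating it from self-exploration. Helpers simulate data collection.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def demo_samples(n=20):
    """Stand-in for feature/label pairs obtained from user demonstrations."""
    X = rng.normal(size=(n, 4))                  # e.g., object shape/pose features
    y = (X[:, 0] + X[:, 1] > 0).astype(int)      # e.g., "opens" vs. "does not open"
    return X, y

def self_exploration_batch(n=10):
    """Stand-in for outcomes the robot observes by acting on objects itself."""
    X = rng.normal(size=(n, 4))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    return X, y

# Condition (3): seed the affordance model with human demonstrations...
clf = SGDClassifier(random_state=0)
X_demo, y_demo = demo_samples()
clf.partial_fit(X_demo, y_demo, classes=np.array([0, 1]))

# ...then keep refining it with the robot's own exploration batches.
for _ in range(5):
    X_self, y_self = self_exploration_batch()
    clf.partial_fit(X_self, y_self)

X_test, y_test = self_exploration_batch(100)
print("held-out accuracy:", clf.score(X_test, y_test))
```

The point of the sketch is only the data flow: demonstrations bias the model toward the salient, rare cases people tend to show, and self-exploration then fills in the rest.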


Affordance Templates for Shared Robot Control

AAAI Conferences

This paper introduces the Affordance Template framework used to supervise task behaviors on the NASA-JSC Valkyrie robot at the 2013 DARPA Robotics Challenge (DRC) Trials. The framework provides graphical interfaces to human supervisors that can be adjusted to the run-time environmental context (e.g., the size, location, and shape of the objects the robot must interact with). Additional improvements, described below, inject degrees of autonomy into instantiations of affordance templates at run-time in order to enable efficient human supervision of the robot for accomplishing tasks.
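As a rough illustration of what "a template adjustable to run-time context" can mean, the sketch below models a template as a parameterized task object whose object parameters a supervisor can tweak before waypoints are generated. The field names, the adjust()/instantiate() helpers, and the toy geometry are assumptions for illustration, not the framework's actual API.

```python
# Hypothetical sketch of an affordance template as a parameterized task object.
from dataclasses import dataclass, field

@dataclass
class AffordanceTemplate:
    name: str                        # e.g., "turn_valve"
    object_size: float               # metres, adjustable by the supervisor
    object_pose: tuple               # (x, y, z) in the robot frame
    waypoints: list = field(default_factory=list)  # end-effector goals

    def adjust(self, **params):
        """Supervisor tweaks template parameters to match the run-time scene."""
        for key, value in params.items():
            setattr(self, key, value)

    def instantiate(self):
        """Produce end-effector waypoints offset to the registered object (toy geometry)."""
        x, y, z = self.object_pose
        r = self.object_size / 2.0
        self.waypoints = [(x - r - 0.10, y, z),   # approach
                          (x - r, y, z),          # contact
                          (x - r, y + r, z)]      # turn
        return self.waypoints

valve = AffordanceTemplate("turn_valve", object_size=0.20, object_pose=(1.0, 0.0, 1.2))
valve.adjust(object_size=0.25, object_pose=(1.1, 0.05, 1.15))  # fit to what the robot sees
print(valve.instantiate())
```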


Learning Predictive Features in Affordance-based Robotic Perception Systems

AAAI Conferences

This work concerns the relevance of Gibson's concept of affordances [1] for visual perception in interactive and autonomous robotic systems. Extending existing functional views on visual feature representations [9], we identify the importance of learning in perceptual cueing for anticipating opportunities for interaction by robotic agents. We investigate how the originally defined representational concept for the perception of affordances - in terms of using either optical flow or heuristically determined 3D features of perceptual entities - should be generalized to arbitrary visual feature representations. In this context we demonstrate the learning of causal relationships between visual cues and predictable interactions, using both 3D and 2D information. In addition, we present a new framework for cueing and recognition of affordance-like visual entities that could play an important role in future robot control architectures. We argue that affordance-like perception should enable systems to react to environmental stimuli more efficiently and autonomously, and provide the potential to plan on the basis of responses to more complex perceptual configurations. We verify the concept with a concrete implementation of affordance learning, applying state-of-the-art visual descriptors extracted from a simulated robot scenario, and show that these features were successfully selected for their relevance in predicting opportunities of robot interaction.
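A minimal sketch of the general idea (visual descriptors mapped to predicted interaction outcomes, with descriptors ranked by predictive relevance) is given below. The descriptor names, labels, and synthetic data are illustrative assumptions and do not reproduce the paper's features or learner.

```python
# Hypothetical sketch: relate visual descriptors to predicted interaction
# outcomes and rank descriptors by how relevant they are to that prediction.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
feature_names = ["mean_depth", "surface_slant", "blob_area", "edge_density"]

# Simulated perceptual entities: only the first two cues actually matter here.
X = rng.normal(size=(300, len(feature_names)))
y = (X[:, 0] - 0.5 * X[:, 1] > 0).astype(int)   # 1 = interaction opportunity present

model = RandomForestClassifier(n_estimators=100, random_state=1).fit(X, y)

# Descriptors ranked by their contribution to predicting the opportunity.
for name, importance in sorted(zip(feature_names, model.feature_importances_),
                               key=lambda p: -p[1]):
    print(f"{name}: {importance:.2f}")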


Learning at the Ends: From Hand to Tool Affordances in Humanoid Robots

arXiv.org Machine Learning

One of the open challenges in designing robots that operate successfully in the unpredictable human environment is how to make them able to predict what actions they can perform on objects and what their effects will be, i.e., the ability to perceive object affordances. Since modeling all possible world interactions is infeasible, learning from experience is required, which poses the challenge of collecting a large amount of experience (i.e., training data). Typically, a manipulative robot operates on external objects using its own hands (or similar end-effectors), but in some cases the use of tools may be desirable; nevertheless, it is reasonable to assume that while a robot can collect many sensorimotor experiences using its own hands, this cannot happen for all possible human-made tools. Therefore, in this paper we investigate the developmental transition from hand to tool affordances: which sensorimotor skills that a robot has acquired with its bare hands can be employed for tool use? By employing a visual and motor imagination mechanism to represent different hand postures compactly, we propose a probabilistic model to learn hand affordances, and we show how this model can generalize to estimate the affordances of previously unseen tools, ultimately supporting planning, decision-making, and tool selection tasks in humanoid robots. We present experimental results with the iCub humanoid robot, and we publicly release the collected sensorimotor data in the form of a hand posture affordances dataset.
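To illustrate the hand-to-tool generalization step, the sketch below trains a simple probabilistic model p(effect | effector shape, object size) on simulated bare-hand trials and then queries it with tool descriptors expressed in the same shape features. The descriptor layout, the toy rule generating the data, and the example tools are assumptions for illustration, not the paper's model or dataset.

```python
# Hypothetical sketch: a probabilistic affordance model trained on bare-hand
# experience and queried with previously unseen tools.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(2)

def hand_experience(n=200):
    """Simulated bare-hand trials: [effector_width, effector_curvature, object_size]."""
    X = rng.uniform(0.0, 1.0, size=(n, 3))
    # Toy rule: pulling the object closer succeeds if the effector is wide
    # enough relative to the object.
    y = (X[:, 0] + 0.3 * X[:, 1] > X[:, 2]).astype(int)   # 1 = "object moved"
    return X, y

model = GaussianNB().fit(*hand_experience())

# Query the same model with tools described by the same shape features that
# characterized the hand postures (the role of the imagination mechanism is
# to provide such a common representation).
tools = {"rake":  [0.9, 0.6, 0.5],
         "stick": [0.1, 0.1, 0.5]}
for name, descriptor in tools.items():
    p = model.predict_proba([descriptor])[0, 1]
    print(f"P(object moves | {name}) ~= {p:.2f}")
```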


A Survey of Knowledge Representation and Retrieval for Learning in Service Robotics

arXiv.org Artificial Intelligence

Within the realm of service robotics, researchers have devoted a great deal of effort to learning motions and manipulations for task execution by robots. Robot learning is a very broad problem, involving subproblems such as object detection, action recognition, motion planning, localization, knowledge representation and retrieval, and the intertwining of computer vision and machine learning techniques. In this paper, we focus on how researchers over the past decades have gathered, represented, and reproduced knowledge to solve these problems. We discuss the problems that have arisen in robot learning and the solutions, technologies, or developments (if any) that have contributed to solving them. Specifically, we look at three broad categories involved in task representation and retrieval for robotics: 1) activity recognition from demonstrations, 2) scene understanding and interpretation, and 3) task representation in robotics - datasets and networks. Within each section, we discuss major breakthroughs and how their methods address present issues in robot learning and manipulation.