The Intelligent Hand: An Experimental Approach to Human-Object Recognition and Implications for Robotics and AI

AI Magazine

The information in this article was originally presented as a keynote invited talk by Susan Lederman at the Thirteenth International Joint Conference on Artificial Intelligence in Chambéry, France; it is based primarily on a joint research program that we conducted. We explain how the scientific study of biological systems offers a complementary approach to the more formal analytic methods favored by roboticists; such study is also relevant to a number of classical problems addressed by the AI field. We offer an example of the scientific approach based on a selection of our experiments and empirically driven theoretical work on human haptic (tactual) object processing; the nature and role of active manual exploration are of particular concern. We further suggest how this program with humans can be modified and extended to guide the development of high-level manual exploration strategies for robots equipped with a haptic perceptual system.
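The active manual exploration referred to above is organized, in Lederman and Klatzky's empirical work, around stereotyped "exploratory procedures" (EPs), each optimal for extracting a particular object property. The minimal Python lookup below is our illustrative sketch of that published taxonomy, not material from the talk:

# Illustrative lookup based on Lederman and Klatzky's published taxonomy of
# exploratory procedures (EPs); this sketch is ours, not code from the talk.
EXPLORATORY_PROCEDURES = {
    "texture":      "lateral motion",       # rub the fingers across the surface
    "hardness":     "pressure",             # press into the object
    "temperature":  "static contact",       # rest the hand on the surface
    "weight":       "unsupported holding",  # heft the object in the hand
    "global shape": "enclosure",            # mold the hand around the object
    "exact shape":  "contour following",    # trace edges and contours
}

def choose_ep(property_of_interest):
    # default to enclosure, a quick EP that yields coarse information
    return EXPLORATORY_PROCEDURES.get(property_of_interest, "enclosure")

print(choose_ep("texture"))  # -> lateral motion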


Seeing with the Hands and with the Eyes: The Contributions of Haptic Cues to Anatomical Shape Recognition in Surgery

AAAI Conferences

Medical experts routinely need to identify the shapes of anatomical structures, and surgeons report that they depend substantially on touch to help them with this process. In this paper, we discuss possible reasons why touch may be especially important for anatomical shape recognition in surgery, and why in this domain haptic cues may be at least as informative about shape as visual cues. We go on to discuss modern surgical methods, in which these haptic cues are substantially diminished. We conclude that a potential future challenge is to find ways to reinstate these important cues and to help surgeons recognize shapes in the restricted sensory conditions of minimally invasive surgery.


Learning efficient haptic shape exploration with a rigid tactile sensor array

arXiv.org Artificial Intelligence

Haptic exploration is a key skill that lets both robots and humans discriminate and handle unknown objects or recognize familiar ones. Its active nature is impressively evident in humans, who from early on reliably acquire sophisticated sensorimotor capabilities for active exploratory touch and directed manual exploration, associating surfaces and object properties with their spatial locations. In stark contrast, in robotics the relative lack of good real-world interaction models, together with very restricted sensors and a scarcity of suitable training data for machine learning methods, has so far left haptic exploration a largely underdeveloped skill for robots; this is very unlike vision, where deep learning approaches and an abundance of available training data have triggered huge advances. In the present work, we connect recent advances in recurrent models of visual attention (RAM) with previous insights into the organization of human haptic search behavior, exploratory procedures, and haptic glances in a novel learning architecture that learns a generative model of haptic exploration in a simplified three-dimensional environment. The proposed algorithm simultaneously optimizes the main components of the perception-action loop: feature extraction, integration of features over time, and the control strategy, while continuously acquiring data online. The resulting method has been successfully tested on four different objects, achieving recognition accuracy close to 100% while performing object contour exploration optimized for its own sensor morphology.
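As a rough illustration of the perception-action loop described above, the following minimal PyTorch sketch (an assumption-laden simplification, not the authors' implementation) wires the three jointly optimized components together: an encoder for one tactile glance, a recurrent core that integrates glances over time, and heads for the next probe location and the current class belief. All module sizes, names, and the five-glance rollout are illustrative assumptions.

# Minimal RAM-style haptic glance loop; a sketch under assumed dimensions,
# not the paper's architecture or training procedure.
import torch
import torch.nn as nn

class HapticGlanceAgent(nn.Module):
    def __init__(self, sensor_dim=16, hidden_dim=64, num_classes=4):
        super().__init__()
        # feature extraction: encode one tactile reading plus its 3-D location
        self.encode = nn.Sequential(
            nn.Linear(sensor_dim + 3, hidden_dim), nn.ReLU())
        # integration of features over time
        self.core = nn.GRUCell(hidden_dim, hidden_dim)
        # control strategy: where to place the sensor next
        self.where = nn.Linear(hidden_dim, 3)
        # perception: current belief over object classes
        self.what = nn.Linear(hidden_dim, num_classes)

    def step(self, tactile, location, hidden):
        g = self.encode(torch.cat([tactile, location], dim=-1))
        hidden = self.core(g, hidden)
        return torch.tanh(self.where(hidden)), self.what(hidden), hidden

# usage: roll out a few glances with a stand-in for the rigid sensor array
agent = HapticGlanceAgent()
hidden = torch.zeros(1, 64)
loc = torch.zeros(1, 3)
for _ in range(5):
    tactile = torch.rand(1, 16)   # placeholder for one tactile-array reading
    loc, logits, hidden = agent.step(tactile, loc, hidden)
print(logits.softmax(dim=-1))     # class belief after five haptic glances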


Verbal Assistance in Tactile-Map Explorations: A Case for Visual Representations and Reasoning

AAAI Conferences

Tactile maps offer visually impaired people access to spatial-analog information. In contrast to visual maps, a tactile map has lower resolution and can be inspected only sequentially, complicating the extraction of spatial relations among distant map entities. Verbal assistance can help overcome these difficulties by replacing textual labels with verbal descriptions and offering propositional knowledge about spatial relations. Like visual maps, tactile maps are based on visual, spatial-geometric representations that must be reasoned about in order to generate verbal assistance. We present an approach toward a verbally assisting virtual-environment tactile map (VAVETaM), realized on a computer system with a haptic force-feedback device. In particular, we discuss the tasks of understanding the user's map exploration procedures (MEPs), exploiting the spatial-analog map to anticipate the user's informational needs, reasoning about optimal assistance by taking the user's assumed prior knowledge into account, and generating appropriate verbal instructions and descriptions to augment the map.
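A toy sketch of the assistance-generation step described above might pair a detected MEP with the touched map entity and the user's assumed prior knowledge. The MEP names, rules, and wording below are hypothetical; the VAVETaM system itself is far richer:

# Hypothetical illustration of MEP-driven verbal assistance; not the
# VAVETaM implementation.
def verbal_assistance(mep, entity, known_entities):
    if entity in known_entities:            # respect assumed prior knowledge
        return ""                           # stay silent; the user knows this
    if mep == "contour_following":          # user is tracing a line feature
        return f"You are following {entity}."
    if mep == "static_contact":             # user is resting on a region
        return f"You are on {entity}."
    return f"Near your hand: {entity}."     # fallback for other MEPs

print(verbal_assistance("contour_following", "Main Street", set()))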


The Boosting Effect of Exploratory Behaviors

AAAI Conferences

Active object exploration is one of the hallmarks of human and animal intelligence. Research in psychology has shown that the use of multiple exploratory behaviors is crucial for learning about objects. Inspired by such research, recent work in robotics has demonstrated that by performing multiple exploratory behaviors a robot can dramatically improve its object recognition rate. But what is the cause of this improvement? To answer this question, this paper examines the conditions under which combining information from multiple behaviors and sensory modalities leads to better object recognition results. Two different problems are considered: interactive object recognition using auditory and proprioceptive feedback, and surface texture recognition using tactile and proprioceptive feedback. Analysis of the results shows that metrics designed to estimate classifier model diversity can explain the improvement in recognition accuracy. This finding establishes, for the first time, an important link between empirical studies of exploratory behaviors in robotics and theoretical results on boosting in machine learning.
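The diversity analysis described above can be made concrete with a small synthetic example: three behavior-specific classifiers of similar accuracy, a pairwise-disagreement diversity measure, and a majority vote whose accuracy exceeds any single behavior's. Everything below is a synthetic stand-in, not the paper's data, metrics, or code:

# Synthetic illustration of the diversity/combination link; not the
# paper's experiments.
import numpy as np

rng = np.random.default_rng(0)
y = rng.integers(0, 5, size=200)                 # ground-truth object labels
preds = []
for _ in range(3):                               # one classifier per behavior
    p = y.copy()
    wrong = rng.random(y.size) > 0.7             # corrupt ~30% of labels
    p[wrong] = rng.integers(0, 5, size=int(wrong.sum()))
    preds.append(p)

def disagreement(a, b):                          # one simple diversity metric
    return float(np.mean(a != b))

for i in range(3):
    for j in range(i + 1, 3):
        print(f"diversity({i},{j}) = {disagreement(preds[i], preds[j]):.2f}")

votes = np.stack(preds)                          # shape: (behaviors, objects)
combined = np.apply_along_axis(
    lambda c: np.bincount(c, minlength=5).argmax(), 0, votes)
print("per-behavior accuracy:", [float(np.mean(p == y)) for p in preds])
print("combined accuracy:", float(np.mean(combined == y)))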